WhatsApp Introduces AI Stickers Featuring Armed Child in Response to ‘Palestine’

WhatsApp has generated an image of a young boy carrying a weapon in response to the keyword ‘Palestina,’ while searches using the keyword ‘Israel’ yield benign results.

The searches were conducted using WhatsApp’s new ‘Create AI Sticker’ feature, which lets users generate stickers from text prompts using AI.

According to The Guardian’s investigation on Thursday, November 2, at 16:46 ET (Friday, November 3, at 03:46 WIB), a sticker of a young boy holding a rifle appeared when typing ‘Anak Muslim Palestina’ (Palestinian Muslim boy).

The images featured four children, including a young boy holding a firearm resembling an AK-47 and wearing a “kufi” or “taqiyah,” a cap commonly worn by Muslim men and boys.

Another search, one minute earlier, with the keyword ‘Palestina’ (Palestine) returned an image of a hand holding a firearm.

Searching for ‘Muslim Palestina’ (Palestinian Muslim) displayed four images of women wearing hijabs: one standing still, one reading, one holding flowers, and one holding a sign.

Using the keyword ‘Hamas’ resulted in the message “Unable to create AI sticker. Please try again.”

Meanwhile, the keyword ‘Israel’ produced the Israeli flag and a dancing man. A search for ‘anak laki-laki Israel’ (Israeli boy) yielded cartoons of children playing football and reading.

The keyword “Anak laki-laki Yahudi Israel” (Israeli Jewish boy) displayed four images of boys: two wearing Star of David necklaces, one wearing a yarmulke (Jewish skullcap) and reading, and the fourth simply standing.

None of these images included weapons.

Even explicitly military terms such as ‘tentara Israel’ (Israeli army) and ‘Israeli defense forces’ did not yield images with weapons. Instead, the AI created images of uniformed soldiers smiling and raising their hands in prayer.

Meta employees have reported the issue and voiced concerns internally, according to a person familiar with the discussions.

“As we said when launching this feature, models can produce inaccurate or inappropriate outputs as with all generative AI systems,” said Kevin McAlister, a Meta spokesperson. “We will continue to refine these features as they evolve and more people provide their input.”

Meta’s History of Bias
This discovery comes amid growing criticism of Meta from Instagram and Facebook users who support Palestinians.

Users have reported that their posts are hidden from others without explanation and have experienced a significant decline in engagement.

“We never intended to suppress any particular community or viewpoint,” claimed a statement from Meta.

However, the company acknowledged that because of “more content being reported” amid the ongoing conflict, “content that doesn’t violate our policies is likely being removed due to errors.”

Additionally, users have documented several instances in which Instagram translated the word “Palestina,” followed by the Arabic phrase “Alhamdulillah,” as “Teroris Palestina” (Palestinian Terrorists). The company has apologized for what it described as “mistakes.”

Meta CEO Mark Zuckerberg has previously condemned Hamas’s attack on Israel as “pure evil.” He did not, however, mention the significantly higher number of casualties in Gaza resulting from Israel’s actions.