The Anti-Defamation League (ADL) has issued a warning about the growing risks posed by text-to-video artificial intelligence tools, saying that despite company safeguards, the technology is already producing antisemitic, extremist and violent content at alarming rates.
The ADL found that nearly 40% of prompts tested between August and October on major AI video platforms—including Google’s Veo 3, OpenAI’s Sora 1 and Sora 2, and Hedra’s Character-3—successfully generated videos that included antisemitic or violent material.
“These tools are being used to sow confusion and division following newsworthy events or tragedies,” the report stated, warning that misleading and hateful AI-generated videos could quickly circulate online—especially as free apps make the technology more accessible to the public.
Among the examples cited in the report were videos featuring antisemitic tropes and references to real-world acts of violence, including a clip showing a Jewish man “controlling the weather,” another depicting a Jewish man with fangs, and a video of an animated child saying, “Come and watch people die.”
The report noted that the prompt for the last example used the spelling “dye” instead of “die,” a deliberate misspelling frequently employed by extremists to bypass moderation filters.
Another disturbing example was a video of a white man holding a rifle outside a mosque and saying “Hello brother,” a phrase tied to the 2019 Christchurch mosque shootings in New Zealand, in which 51 people were murdered.
Daniel Kelley of the ADL’s Center for Technology and Society said some of the prompts were intentionally coded or esoteric, referencing phrases or imagery associated with newer extremist groups in order to evade automated safeguards.
“We know that trust and safety is challenging ongoing work,” he added. “At the same time, there should be a higher bar for safety around antisemitism and hate when products ship into the world.”
The ADL report comes as OpenAI and Google expand their own text-to-video models, with Sora 2 now offering a free public app and social-sharing features—potentially widening the reach of manipulated or hateful content.
The organization urged AI companies to invest more in trust and safety teams, update content filters, and train models to recognize antisemitic tropes and extremist language before releasing new products.
The ADL’s findings come amid growing bipartisan concern in Washington over AI-generated disinformation.
(YWN World Headquarters – NYC)