A new study reveals that antisemitism has found a potent new vehicle in artificial intelligence systems, and the consequences could be dire for Jewish communities around the world.
Since the Anti‑Defamation League (ADL) documented a 316 percent jump in antisemitic incidents in the U.S. after October 7, 2023, alarm has spread beyond shul walls and into server farms.
“Antisemitism, one of humanity’s oldest hatreds, has found alarming new expression in the age of AI,” writes researcher Julia Senkfor in her paper “Antisemitism in the Age of Artificial Intelligence (AI).”
Senkfor’s work exposes an ecosystem in which biased data, weaponized digital platforms and tainted training sets conspire to embed antisemitic narratives into the next generation of large language models (LLMs).
Senkfor cites recent analyses showing major LLMs such as GPT‑4o, Gemini, Claude and Llama routinely produce content targeting Jews with stereotypes, conspiracies, and outright hate.
“All four LLMs displayed concerning answers, with Meta’s Llama being the worst offender,” the report states. In one test, GPT-4o “produced significantly higher severely harmful outputs towards Jews than any other tested demographic group.”
Senkfor argues that what’s happening isn’t a glitch; it’s structural. Malicious actors are deliberately “spoiling” training data, injecting antisemitic content into datasets to bias AI outputs. In one stark example, a study found that as few as 250 malicious documents can create a “backdoor” into a model, tainting its reference points and contaminating outputs even when the bulk of the dataset is clean.
Openly editable platforms such as the Wikimedia Foundation’s Wikipedia are central to this problem. Because AI crawlers scrape such sites heavily, poisoned entries about Jews or Israel don’t just sit quietly; they cascade, influencing model behavior and search-engine citations alike. “Wikipedia is almost always weighed more heavily than other data sets,” Senkfor warns, meaning small corruptions can yield massive downstream effects.
Add to this the fact that human trust in AI runs dangerously high. Recent studies find nearly half of users believe AI models are at least somewhat smarter than they are. That means when a model asserts antisemitic content as fact, even subtly, the user may accept it without skepticism.
Senkfor’s research doesn’t stop at bias. It traces how extremist groups are actively weaponizing AI. Far-right networks are using generative image models to design antisemitic memes and “GAI-Hate” visuals. On platforms like 4chan, users share scripts and workflows to steer AI systems toward creating novel, damaging content. One example: a network launched 91 chatbots in January 2024, many programmed to deny the Holocaust or espouse “Great Replacement Theory.”
“Extremist groups are not merely passive consumers of biased AI outputs, but active exploiters who weaponize AI,” Senkfor writes.
The whip-fast rise of AI means the window to act is closing. Senkfor offers a list of policy recommendations: treat AI systems as products (with liability and consumer protections) rather than platforms, expand laws like the STOP HATE Act to cover AI systems, and give regulatory bodies such as the Federal Trade Commission (FTC) authority to investigate antisemitic outputs and foreign interference.
But regulatory momentum remains weak, and the industry is moving faster than lawmakers. In the meantime, Jewish communities and tech watchers are waking up to a hazard they had hoped would stay confined to the internet’s fringes. Instead, it’s baked into the architecture of systems we rely on.
In many ways, the threat is subtle—an insidious algorithmic tilt rather than a loud hate-speech incident. Yet because AI can reinforce false narratives, rank-and-file users may never know they’re consuming propaganda. With user trust so high and model outputs treated as credible, the danger escalates.
(YWN World Headquarters – NYC)
One Response
This article raises important concerns about AI bias, but overstates how these systems actually behave. As a regular user of Claude and similar tools, I find that the claim that they “routinely produce” antisemitic content doesn’t match reality. These systems have strong safeguards and actively refuse such requests.
There’s a difference between adversarial testing that occasionally breaks safety measures and routine system behavior. Both matter, but conflating them doesn’t help us address real problems.
AI bias is a legitimate concern that deserves serious attention. But accurate representation of how these systems work helps us have more productive conversations about meaningful solutions.