Bias found in leading AI models – The Australian Jewish News
The Anti-Defamation League (ADL) has conducted a thorough evaluation of anti-Jewish and anti-Israel bias in prominent artificial intelligence (AI) systems, revealing troubling findings across all platforms assessed.

Comprehensive Evaluation by ADL
According to the study by ADL’s Centre for Technology and Society, four major AI models were examined: GPT by OpenAI, Claude by Anthropic, Gemini by Google, and Llama by Meta. The research uncovered concerning patterns of bias, misinformation, and selective engagement on issues related to Jewish people.
Findings of the Report
The report highlighted that Meta’s Llama exhibited the most pronounced anti-Jewish and anti-Israel biases. Additionally, GPT and Claude displayed significant anti-Israel bias, especially concerning responses about the Israel-Hamas conflict.

One concerning pattern identified was the AI systems' reluctance to answer questions about Israel compared with other topics. The models also failed to consistently reject antisemitic tropes and conspiracy theories.
Recommendations by ADL
Each AI model was queried 8,600 times, yielding 34,400 responses in total, in what marks the initial phase of ADL's broader examination of large language models (LLMs) and antisemitic bias. ADL's recommendations include conducting rigorous pre-deployment testing and carefully evaluating the reliability and potential biases of training data.

The findings emphasize the necessity for enhanced safeguards and mitigation strategies within the AI industry to address these biases as AI technologies continue to influence public discourse.