Gemini Pro 2.5 Dominates Competitors in AI Benchmark Tests

Published On Sat Jun 07 2025
New version of Gemini beats other AIs at math, science, and reasoning

Google's new Gemini 2.5 Pro is leading the way in artificial intelligence, surpassing its competitors in reasoning, science, and coding. Benchmark results released by Google on Thursday show the model ahead in several key areas.

Benchmark Results

According to Google's findings, Gemini 2.5 Pro outperforms top competitors such as OpenAI o3 and Claude Opus 4 on key benchmarks including Humanity's Last Exam, Aider Polyglot, and FACTS Grounding.

Leadership Position

With a remarkable score of 1470, Gemini 2.5 Pro has secured the top spot on the LMArena leaderboard, underscoring its dominance in the AI landscape.

Availability

While the enhanced features of Gemini 2.5 Pro are generating buzz, the final version is not yet widely available. Google has labeled this release an "upgraded preview," with a stable version expected in the coming weeks. Interested users can access the preview through the Gemini app.