Google has introduced its latest artificial intelligence model, “Gemini,” positioning it as a competitor to OpenAI’s GPT-4. According to Google, Gemini excels in math and specialized coding, areas where the company claims it outperforms GPT-4 on benchmark tests. The Ultra version of Gemini reportedly achieves “state-of-the-art performance” on 30 of 32 academic benchmarks, and Google’s chief scientist, Jeff Dean, says it is the first model to exceed human-expert performance on the Massive Multitask Language Understanding (MMLU) test, scoring above 90% across its 57 subjects.