The students participating in the annual International Math Olympiad (IMO) represent some of the most talented young mathematical minds in the world. This year, they faced a newly enhanced array of powerful AI models, including Google's Gemini Deep Think. The company says it put its model to the test under the same rules as human participants, and it improved on an already solid showing from last year.
Google says its specially tuned math AI solved five of the six problems, which is good enough for gold medal status. And unlike OpenAI, Google played by the rules set forth by the IMO.
A new Gemini
The Google DeepMind team participated in last year's IMO using an AI system composed of the AlphaProof and AlphaGeometry 2 models. That setup solved four of the six problems, earning silver medal status; only half of the human participants earn any medal at all.