Google's DeepMind AI takes home silver medal in complex math competition

Today’s artificial intelligence (AI) systems excel at many tasks but typically fall short at tackling complex math problems. That’s why Google is excited that two of its DeepMind AI systems solved several challenging problems posed in a prestigious math competition.
In a post published Thursday, Google touted the achievements of its DeepMind AlphaProof and AlphaGeometry 2 AI models. Entering the 2024 International Mathematical Olympiad (IMO), the two systems solved four of the competition’s six problems, earning a score at the level of a silver medalist. It’s the first time an AI has reached that level in the contest, which is typically geared toward young mathematicians.
Also: OpenAI launches SearchGPT – here’s what it can do and how to access it
Each year, IMO invites elite pre-college mathematicians to wrestle with six extremely difficult problems in algebra, combinatorics (counting, selecting, and arranging a large number of objects), geometry, and number theory. Branching out beyond humans, the competition has also become a way to test and measure machine learning and AI systems in advanced mathematical reasoning.
With the problems translated into a formal language understood by Google’s AI, AlphaProof solved two algebra problems and one problem in number theory, not only finding the answer but also proving that the answer was correct. Google cited the number theory challenge as the hardest one in the competition, solved by only five of the human contestants. AlphaGeometry 2 figured out the geometry problem. But neither model was able to crack the two combinatorics problems.
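Google’s post doesn’t show what a formalized problem looks like, so as a rough illustration only: below is a toy statement and proof in Lean, the formal language AlphaProof works in (introduced in the next paragraph). The theorem is deliberately trivial, nowhere near IMO difficulty; the point is that the claim is stated precisely and the proof is checked mechanically rather than by a human grader.

```lean
-- A toy statement and machine-checkable proof in Lean 4. Real IMO
-- problems are vastly harder; this only shows the format: the claim is
-- written precisely, and the proof is verified by Lean's kernel rather
-- than by a human grader.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Concrete arithmetic claims can be checked by direct computation.
example : 2 ^ 10 = 1024 := rfl
```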
AlphaProof is an AI-based system that trains itself to prove mathematical statements in the formal language Lean. It combines a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself to play and win at chess, shogi, and Go.
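To make the self-play idea concrete, here is a deliberately tiny sketch of the AlphaZero-style loop: a policy proposes steps, a checker rejects illegal ones, and only episodes that end in a verified “win” reinforce the policy. Everything in it is invented for illustration; the real system pairs a language model with search over formalized proofs, while this toy merely reduces a number to zero with two legal moves and omits AlphaZero’s tree search and value network entirely.

```python
# A deliberately tiny sketch of the self-play loop described above.
# Hypothetical throughout: the "theorem" is reducing a number to 0 using
# two legal moves, the "verifier" just rejects illegal moves, and learning
# is plain reinforcement of winning trajectories.
import random
from collections import defaultdict

MOVES = {
    "sub1": lambda n: n - 1 if n >= 1 else None,        # always legal above 0
    "halve": lambda n: n // 2 if n % 2 == 0 else None,  # legal only on even n
}

def episode(policy, start, max_steps=30):
    """Sample moves in proportion to learned weights until the goal (0)
    is reached or the step budget runs out. Reward 1.0 on success."""
    state, trajectory = start, []
    for _ in range(max_steps):
        names = list(MOVES)
        weights = [policy[(state, m)] for m in names]
        move = random.choices(names, weights=weights)[0]
        nxt = MOVES[move](state)
        if nxt is None:
            continue  # the "verifier" rejected an illegal step
        trajectory.append((state, move))
        state = nxt
        if state == 0:
            return trajectory, 1.0
    return trajectory, 0.0

def train(iterations=5000):
    """Alternate self-play and learning: every (state, move) pair starts at
    weight 1.0, and each winning episode reinforces the moves it used."""
    policy = defaultdict(lambda: 1.0)
    for _ in range(iterations):
        trajectory, reward = episode(policy, start=random.randint(1, 100))
        if reward > 0:
            for state, move in trajectory:
                policy[(state, move)] += 1.0
    return policy

if __name__ == "__main__":
    policy = train()
    # After training, even states should favor "halve", which reaches 0
    # in fewer steps and therefore wins more often within the budget.
    print(policy[(10, "halve")], policy[(10, "sub1")])
```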
Also: Google’s new math app solves nearly any problem with AI: Here’s how to use it
AlphaGeometry 2 is an improved version of AlphaGeometry. Based on Google’s Gemini AI, the model can tackle highly challenging geometry problems, including those involving movements of objects and equations of angles, ratios, and distances.
Beyond testing the math skills of AlphaProof and AlphaGeometry 2, Google took advantage of IMO to try out a natural language reasoning system built on Gemini with advanced problem-solving capabilities. Unlike the other two models, this one doesn’t require problems to be translated into a formal language.
Though the achievement of these models may sound abstract, Google sees it as another step toward the future of AI.
“We’re excited for a future in which mathematicians work with AI tools to explore hypotheses, try bold new approaches to solving long-standing problems, and quickly complete time-consuming elements of proofs — and where AI systems like Gemini become more capable at math and broader reasoning,” the company said in its post.