Google's DeepMind AI takes home silver medal in complex math competition
Today’s artificial intelligence (AI) systems possess many skills but typically fall short when it comes to tackling complex math problems. That’s why Google is excited that two of its DeepMind AI systems were able to solve several challenging problems posed in a prestigious math competition.
In a new post published Thursday, Google touted the achievements of its DeepMind AlphaProof and AlphaGeometry 2 AI models. Entering the 2024 International Mathematical Olympiad (IMO), the two systems solved four of the six problems, earning Google's AI the equivalent of a silver medal for the first time in this contest, which is typically geared toward young mathematicians.
Also: OpenAI launches SearchGPT – here’s what it can do and how to access it
Each year, the IMO invites elite pre-college mathematicians to wrestle with six extremely difficult problems in algebra, combinatorics (counting, selecting, and arranging large numbers of objects), geometry, and number theory. Beyond its role as a human contest, the competition has also become a way to test and measure machine learning and AI systems in advanced mathematical reasoning.
With the problems translated into a formal language understood by Google’s AI, AlphaProof solved two algebra problems and one problem in number theory, not only finding the answer but also proving that the answer was correct. Google cited the number theory challenge as the hardest one in the competition, solved by only five of the human contestants. AlphaGeometry 2 figured out the geometry problem. But neither model was able to crack the two combinatorics problems.
AlphaProof is an AI-based system that trains itself to prove mathematical statements in the formal language Lean. It combines a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself how to play and win at chess, shogi, and Go.
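For readers unfamiliar with Lean, here is a minimal sketch of what a formally stated theorem and its machine-checkable proof look like. These are deliberately simple, made-up examples in Lean 4, not any of the 2024 IMO problems, which are far harder to state and prove; they only illustrate the statement-plus-proof format AlphaProof works in.

```lean
-- Toy illustration only: not actual IMO problems. A claim is written as a
-- `theorem`, and Lean's compiler checks that the proof is valid.

-- Claim: adding zero on the right leaves a natural number unchanged.
-- `rfl` works because `n + 0` reduces to `n` by definition.
theorem add_zero_example (n : Nat) : n + 0 = n := by
  rfl

-- Claim: addition of natural numbers is commutative.
-- Here the proof reuses a lemma from Lean's standard library.
theorem add_comm_example (m n : Nat) : m + n = n + m := by
  exact Nat.add_comm m n
```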
Also: Google’s new math app solves nearly any problem with AI: Here’s how to use it
AlphaGeometry 2 is an improved version of AlphaGeometry. Based on Google's Gemini AI, this model can handle highly challenging geometry problems, including those involving movements of objects and equations of angles, ratios, and distances.
Beyond testing the math skills of AlphaProof and AlphaGeometry 2, Google took advantage of IMO to try out a natural language reasoning system built on Gemini with advanced problem-solving capabilities. Unlike the other two models, this one doesn’t require problems to be translated into a formal language.
Though the achievement of these models may sound abstract, Google sees it as another step toward the future of AI.
“We’re excited for a future in which mathematicians work with AI tools to explore hypotheses, try bold new approaches to solving long-standing problems, and quickly complete time-consuming elements of proofs — and where AI systems like Gemini become more capable at math and broader reasoning,” the company said in its post.