
A Maze the Size of Earth: New AI Tackles Math Problems That Take Millions of Steps
gizmodo.com
By Margherita Bassi | Published February 15, 2025

Researchers have developed an artificially intelligent system that does the exact opposite of living in the moment. It doesn't just think a few steps ahead; it thinks millions of steps ahead. A team led by mathematician Sergei Gukov of the California Institute of Technology (Caltech) has created a new type of machine-learning algorithm designed to solve math problems that require an extremely long series of steps. A really long series of steps; we're talking a million steps or more.

Specifically, the AI was able to make progress on a complex problem called the Andrews–Curtis conjecture, which has stumped mathematicians for decades. The conjecture basically asks: Can certain math puzzles always be solved using a fixed set of allowed moves, like rearranging or undoing steps? To that end, the new Caltech program sought to find long sequences of steps that are rare and hard to find. "It's like trying to find your way through a maze the size of Earth. These are very long paths that you have to test out, and there's only one path that works," Ali Shehper, first author of the study and a mathematician at Rutgers University, said in a Caltech statement.

In a preprint study posted on arXiv last August and updated on Tuesday, Shehper and his colleagues detail how they used their newly developed AI to solve families of problems related to the Andrews–Curtis conjecture, which involves abstract algebra. To be clear, they didn't solve the conjecture itself. While that might seem anticlimactic, the researchers did disprove some potential counterexamples to the conjecture.
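To give a concrete feel for the "allowed moves" the conjecture is about, here is an illustrative sketch (not the authors' code) of the three standard Andrews–Curtis moves on a group presentation. Relators are encoded as tuples of integers, where generator x_k is the integer k and its inverse is -k; the encoding and function names are our own choices for this example.

```python
def reduce_word(word):
    """Freely reduce a word: cancel adjacent generator/inverse pairs."""
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def concat(relators, i, j):
    """AC move 1: replace relator i with (relator i)(relator j), i != j."""
    rs = list(relators)
    rs[i] = reduce_word(rs[i] + rs[j])
    return tuple(rs)

def invert(relators, i):
    """AC move 2: replace relator i with its inverse."""
    rs = list(relators)
    rs[i] = tuple(-g for g in reversed(rs[i]))
    return tuple(rs)

def conjugate(relators, i, g):
    """AC move 3: replace relator i with g * (relator i) * g^-1."""
    rs = list(relators)
    rs[i] = reduce_word((g,) + rs[i] + (-g,))
    return tuple(rs)

# Toy example: relators [x y x^-1 y^-1, y]. One concat move shortens the first
# relator, because the trailing y^-1 cancels against y.
start = ((1, 2, -1, -2), (2,))
step1 = concat(start, 0, 1)  # first relator becomes (1, 2, -1)
```

The conjecture asks whether presentations of the trivial group can always be reduced, by moves like these, to the simplest possible presentation; the hard part is that a successful reduction may first have to pass through much longer intermediate relators.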
While disproving counterexamples doesn't necessarily make the original conjecture true, it does bolster it. "Ruling out some of the counterexamples gives us confidence in the validity of the original conjecture and helps build our intuition about the main problem," Shehper explained. "It gives us new ways to think about it."

Gukov compared the math problems to the Rubik's Cube. "Can you take this scrambled, complicated Rubik's Cube and get it back to its original state? You have to test out these very long sequences of moves, and you won't know if you are on the right path until the very end," he explained.

So how does the AI do it? Basically, by thinking outside the box. Following a reinforcement learning approach, the researchers trained the AI by first feeding it easy math problems, followed by increasingly difficult tasks. "It tries various moves and gets rewarded for solving the problems," said Shehper. "We encourage the program to do more of the same while still keeping some level of curiosity. In the end, it develops new strategies that are better than what humans can do. That's the magic of reinforcement learning."

The algorithm ultimately learned to generate long sequences of unexpected moves, which the researchers termed "super moves." In contrast, ChatGPT's output is much more boring. "If you ask ChatGPT to write a letter, it will come up with something typical. It's unlikely to come up with anything unique and highly original. It's a good parrot," said Gukov. "Our program is good at coming up with outliers."

I can think of at least one outlier event that would be really convenient for an AI to predict: financial crashes. While current machine-learning programs haven't achieved this level of prognostic sophistication, the researchers speculate that their methods could one day contribute to that sort of intelligent forecasting. "Basically, our program knows how to learn to learn," Gukov explained. "It's thinking outside the box."
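The training recipe described above, easy problems first, rewards for solving, and some retained "curiosity," can be sketched with a toy curriculum-learning loop. This is purely illustrative and not the paper's method: the task (undoing a numeric scramble), the Q-learning algorithm, and all parameter values are assumptions chosen to keep the example small.

```python
import random
from collections import defaultdict

def train(max_difficulty=5, episodes_per_level=500, eps=0.2, alpha=0.5, gamma=0.9):
    """Tabular Q-learning with a curriculum: easy scrambles first, then harder ones."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    actions = (-1, 1)
    for difficulty in range(1, max_difficulty + 1):      # curriculum schedule
        for _ in range(episodes_per_level):
            state = random.randint(-difficulty, difficulty)  # scrambled start
            for _ in range(2 * difficulty + 1):          # limited move budget
                if state == 0:
                    break                                # solved: unscrambled
                # epsilon-greedy: mostly exploit, but keep some curiosity
                if random.random() < eps:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda a: Q[(state, a)])
                nxt = state + a
                reward = 1.0 if nxt == 0 else -0.01      # rewarded for solving
                best_next = max(Q[(nxt, b)] for b in actions)
                Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
                state = nxt
    return Q

def greedy_solve(Q, state, limit=20):
    """Follow the learned policy greedily; return True if it reaches 0."""
    moves = 0
    while state != 0 and moves < limit:
        state += max((-1, 1), key=lambda a: Q[(state, a)])
        moves += 1
    return state == 0
```

The real system searches a vastly larger space of Andrews–Curtis move sequences, but the same ingredients appear: a reward signal for reaching the solved state, exploration to avoid getting stuck, and a curriculum that ramps up problem difficulty.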
He added that the team has made significant improvements in an area of math that is decades old. What's more, Gukov and his colleagues have prioritized approaches that don't require large amounts of computing power, making their work accessible to other academics with small-scale computers. Though the practical applications of this achievement might not be evident in our day-to-day lives, their work joins that of a host of other researchers optimizing machine-learning algorithms to solve humanity's problems (not to destroy our civilization).