Google's Gemini 2.5 Pro could be the most important AI model so far this year
www.fastcompany.com
Google released its new Gemini 2.5 Pro Experimental AI model late last month, and it's quickly stacked up top marks on a number of coding, math, and reasoning benchmark tests, making it a contender for the world's best model right now. Gemini 2.5 Pro is a reasoning model, meaning its answers derive from a mix of training data and real-time reasoning performed in response to the user's prompt or question. Like other newer models, Gemini 2.5 Pro can consult the web, but it also contains a fairly recent snapshot of the world's knowledge: its training data cuts off at the end of January 2025.

Last year, in order to boost model performance, AI researchers began shifting toward teaching models to reason while they're live and responding to user prompts. This approach requires models to process and retain increasingly more data to arrive at accurate answers. (Gemini 2.5 Pro, for example, can handle up to a million tokens.) However, models often struggle with information overload, making it difficult to extract meaningful insights from all that context.

Google appears to have made progress on this front. The YouTube channel AI Explained points out that Gemini 2.5 Pro fared very well on a new benchmark called Fiction.liveBench that's designed to test a model's ability to remember and comprehend contextual information. For instance, Fiction.liveBench might ask the model to read a novelette and answer questions that require a deep understanding of the story and characters. Some of the top models, including those from OpenAI and Anthropic, score well when the amount of stored data (the context window) is relatively small. But as the context window increases to 32K, then 60K, then 120K tokens (about the size of a novelette), Gemini 2.5 Pro stands out for its superior comprehension.

That's important because some of the most productive use cases to date for generative AI involve comprehending and summarizing large amounts of data.
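Those context-window figures can be put in rough perspective with a quick calculation. A common rule of thumb, and only a rough approximation since actual tokenizer counts vary by model and by text, is that one token corresponds to about 0.75 English words:

```python
# Rough conversion from context-window size (in tokens) to English words,
# using the common ~0.75-words-per-token heuristic. Actual tokenizer
# output varies by model and text, so these are ballpark figures only.

WORDS_PER_TOKEN = 0.75  # heuristic, not an exact tokenizer value

def approx_words(token_count: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(token_count * WORDS_PER_TOKEN)

# Window sizes mentioned above, from the benchmark settings up to
# Gemini 2.5 Pro's reported one-million-token context window.
for window in (32_000, 60_000, 120_000, 1_000_000):
    print(f"{window:>9,} tokens = roughly {approx_words(window):,} words")
```

At the upper end, a million-token window corresponds to several full-length books' worth of text in a single prompt, which is why retaining and actually using that context, rather than merely accepting it, is the hard part.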
A service representative might depend on an AI tool to swim through voluminous manuals in order to help someone struggling with a technical problem out in the field, and a corporate compliance officer might need a long context window to sift through years of regulations and policies.

Gemini also scored much higher than competing reasoning models on a new benchmark called MathArena, which tests models using hard questions from recent math Olympiads and contests. The test also requires that the model clearly show its reasoning as it steps toward an answer. Top models from OpenAI, Anthropic, and DeepSeek failed to break 5% of a perfect score, but Gemini 2.5 Pro scored an impressive 24.4%.

The new Google model also scored high on another super-hard benchmark called Humanity's Last Exam, which is meant to show when AI models exceed the knowledge and reasoning of top experts in a given field. Gemini 2.5 Pro scored 18.8%, a mark topped only by OpenAI's Deep Research model. The model also now sits atop the crowdsourced benchmarking leaderboard LMArena.

Finally, Gemini 2.5 Pro is among the top models for computer coding. It scored 70.4% on the LiveCodeBench benchmark, coming in just behind OpenAI's o3-mini model, which scored 74.1%. Gemini 2.5 Pro scored 63.8% on SWE-bench, which measures agentic coding, while Anthropic's latest Claude 3.7 Sonnet scored 70.3%. Google's model also outscored Anthropic, OpenAI, and xAI models on the MMMU visual reasoning test by roughly 6 points.

Google initially released its new model to paying subscribers but has now made it accessible to all users for free.