How do we define AGI? (Image: Charles Towers-Clark using DALL-E)

Microsoft and OpenAI recently sparked controversy by defining Artificial General Intelligence not through technical achievement, but through profit: specifically, $100 billion in annual earnings. This benchmark appears in their partnership agreement, essentially allowing Microsoft to retain access to OpenAI's innovations until that financial goal is met, likely well into 2029.

It's jarring that such a profound technical milestone should be reduced to a monetary target. Yet whilst this lawyers' definition protects Microsoft's interests, it raises a deeper question: what exactly is AGI, and how close are we to achieving it?

"Whatever AGI is and how we define it, I think in every definition that is reasonable, we are still far away," DeepL CEO Jarek Kutylowski tells me during our recent conversation. He cautions against early enthusiasm: "We are very easily impressed by technology at the beginning, but then you have to go into the details to understand what its limitations are."

The Challenge of Definition

While the Oxford English Dictionary offers a broad definition of AGI as "a machine that can exhibit behavior as intelligent as, or more intelligent than, a human being," industry leaders remain divided. OpenAI's Sam Altman describes AGI as the "equivalent of a median human that you could hire as a coworker," while pioneering AI researcher Fei-Fei Li, head of the Stanford Human-Centered AI Institute and CEO of World Labs, admits: "I frankly don't even know what AGI means."

In the realm of language translation, Kutylowski provides a practical perspective: "If we want AI translations that match human translations, we need to have an AI that can understand the world as well as a human can - which could be the definition of AGI." DeepL is one of the most powerful translation tools available today, yet despite advances that have brought it "an epsilon away" from human-level translation in some contexts, true understanding remains elusive.

The Timeline Debate

The disagreement over AGI timelines largely stems from the problem above: how we define AGI. If we focus purely on cognitive abilities, ambitious predictions from Sam Altman, Elon Musk, and Anthropic's CEO Dario Amodei of reaching AGI within 2-4 years seem more plausible. But if we include physical capabilities, then even with the skills of robots designed by the likes of Boston Dynamics, we are far behind.

However, Kutylowski raises a more fundamental concern: "If AGI can replace the brain of anyone, then we have to rethink our society."

The Human Factor

My own experience with AI predictions has taught me humility. In a 2018 book, I forecast that self-driving vehicles would dominate roads within 5-15 years. While the technology has progressed, I underestimated human resistance to change. Though autonomous vehicles demonstrate better safety records than human drivers (except during dawn and dusk), the social implications of displacing 5% of the workforce would have been severe, so perhaps slow adoption has been a saving grace.

Kutylowski frames this challenge philosophically: "Our current value system is very much centered around what have we accomplished, what are we doing, what is our contribution to society." As AI capabilities expand, he asks, "How do we feel fulfilled when that falls away?" This is a key question in Universal Basic Income discussions.

While cognitive AGI might emerge within four years, its integration into society will likely - and thankfully - proceed more gradually.
The institutional inertia I once criticized, particularly in large companies, may actually serve a vital purpose: allowing society to adapt at a manageable pace.

The reality is that AGI's arrival won't be marked by a profit milestone or a single technological breakthrough, but by our collective readiness to reimagine human potential in an AI-enhanced world. Perhaps that's a better measure of progress than any balance sheet.