DeepMind is holding back release of AI research to give Google an edge
arstechnica.com
Clamming up

A tougher vetting process and more bureaucracy make it harder to publish.

Melissa Heikkilä and Stephen Morris, Financial Times | Apr 1, 2025 9:28 am

Credit: Jakub Porzycki/NurPhoto

Google's artificial intelligence arm DeepMind has been holding back the release of its world-renowned research as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry.

The group, led by Nobel Prize-winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that have made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind.

Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or that cast Google's own Gemini AI model in a negative light compared with others.

The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI.

Meanwhile, huge breakthroughs by Google researchers, such as its 2017 transformers paper that provided the architecture behind large language models, played a central role in creating today's boom in generative AI.

Since then, DeepMind has become a central part of its parent company's drive to cash in on the cutting-edge technology, as investors expressed concern that the Big Tech group had ceded its early lead to the likes of ChatGPT maker OpenAI.

"I cannot imagine us putting out the transformer papers for general use now," said one current researcher.

Among the changes in the company's publication policies is a six-month embargo before "strategic" papers related to generative AI are released.
Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter.

A person close to DeepMind said the changes were intended to benefit researchers who had become frustrated spending time on work that would not be approved for strategic or competitive reasons. They added that the company still publishes hundreds of papers each year and is among the largest contributors to major AI conferences.

Concern that Google was falling behind in the AI race contributed to the merger of the London-based DeepMind and California-based Brain AI units in 2023. Since then, the company has been faster to roll out a wide array of AI-infused products.

"The company has shifted to one that cares more about product and less about getting research results out for the general public good," said one former DeepMind research scientist. "It's not what I signed up for."

DeepMind said it had "always been committed to advancing AI research," adding: "We are instituting updates to our policies that preserve the ability for our teams to publish and contribute to the broader research ecosystem."

While the company had a publication review process in place before DeepMind's merger with Brain, the system has become more bureaucratic, according to those with knowledge of the changes.

Former staffers suggested the new processes had stifled the release of commercially sensitive research in order to avoid the leaking of potential innovations.
One said publishing papers on generative AI was "almost impossible."

In one incident, DeepMind stopped the publication of research showing that Google's Gemini language model is not as capable, or is less safe, than rivals, especially OpenAI's GPT-4, according to one current employee.

However, the employee added, it had also blocked a paper that revealed vulnerabilities in OpenAI's ChatGPT, over concerns that the release would seem like a hostile tit-for-tat.

A person close to DeepMind said it did not block papers that discuss security vulnerabilities, adding that it routinely publishes such work under a "responsible disclosure" policy, in which researchers must give companies the chance to fix any flaws before making them public.

But the clampdown has unsettled some staffers at a company where success has long been measured through appearing in top-tier scientific journals. People with knowledge of the matter said the new review processes had contributed to some departures.

"If you can't publish, it's a career killer if you're a researcher," said a former researcher.

Some ex-staff added that projects focused on improving its Gemini suite of AI-infused products were increasingly prioritized in the internal battle for access to data sets and computing power.

In the past few years, Google has produced a range of AI-powered products that have impressed the markets.
These range from improved AI-generated summaries that appear above search results to the Astra AI agent, which can answer real-time queries across video, audio, and text.

The company's share price has increased by as much as a third over the past year, though those gains were pared back in recent weeks as concern over US tariffs hit tech stocks.

In recent years, Hassabis has balanced the desire of Google's leaders to commercialize its breakthroughs with his life mission of trying to build artificial general intelligence: AI systems with abilities that can match or surpass humans.

"Anything that gets in the way of that he will remove," said one current employee. "He tells people this is a company, not a university campus; if you want to work at a place like that, then leave."

Additional reporting by George Hammond.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.