![](https://api.time.com/wp-content/uploads/2025/02/uk-artificial-intelligence.jpg?quality=85&w=2400)
Exclusive: The British Public Wants Stricter AI Rules Than Its Government Does
time.com
British Prime Minister Keir Starmer gives a speech on harnessing AI to drive economic growth and "revolutionize" public services in the U.K. Henry Nicholls - WPA Pool/Getty Images

By Billy Perrigo

February 6, 2025 4:00 AM EST

Even as Silicon Valley races to build more powerful artificial intelligence models, public opinion on the other side of the Atlantic remains decidedly skeptical of the influence of tech CEOs when it comes to regulating the sector, with the vast majority of Britons worried about the safety of new AI systems.

The concerns, highlighted in a new poll shared exclusively with TIME, come as world leaders and tech bosses, from U.S. Vice President JD Vance, France's Emmanuel Macron, and India's Narendra Modi to OpenAI chief Sam Altman and Google's Sundar Pichai, prepare to gather in Paris next week to discuss the rapid pace of developments in AI.

The new poll shows that 87% of Brits would back a law requiring AI developers to prove their systems are safe before release, with 60% in favor of outlawing the development of smarter-than-human AI models. Just 9%, meanwhile, said they trust tech CEOs to act in the public interest when discussing AI regulation. The survey was conducted by the British pollster YouGov on behalf of Control AI, a non-profit focused on AI risks.

The results reflect growing public anxieties about the development of AI systems that could match or even outdo humans at most tasks. Such technology does not currently exist, but creating it is the express goal of major AI companies such as OpenAI, Google, Anthropic, and Meta, the owner of Facebook and Instagram. In fact, several tech CEOs expect such systems to become a reality in a matter of years, if not sooner. It is against this backdrop that 75% of the Britons polled told YouGov that laws should explicitly prohibit the development of AI systems that can escape their environments.
More than half (63%) agreed with the idea of prohibiting the creation of AI systems that can make themselves smarter or more powerful.

The findings of the British poll mirror the results of recent U.S. surveys, and point to a growing gap between public opinion and regulatory action when it comes to advanced AI. Even the European Union's AI Act, widely seen as the world's most comprehensive AI legislation, which began to come into force this month, stops short of directly addressing many of the possible risks posed by AI systems that meet or surpass human abilities.

In Britain, where the YouGov survey of 2,344 adults was conducted over Jan. 16-17, there remains no comprehensive regulatory framework for AI. While the ruling Labour Party pledged to introduce new AI rules ahead of the last general election in 2024, since coming to power it has dragged its feet, repeatedly delaying the introduction of an AI bill as it grapples with the challenge of restoring growth to the country's struggling economy. In January, for example, British Prime Minister Keir Starmer announced that AI would be "mainlined into the veins" of the nation to boost growth, a clear shift away from talk of regulation.

"It seems like they're sidelining their promises at the moment, for the shiny attraction of growth," says Andrea Miotti, the executive director of Control AI. "But the thing is, the British public is very clear about what they want. They want these promises to be met."

A New Push for New Laws

The polling was accompanied by a statement, signed by 16 British lawmakers from both major political parties, calling on the government to introduce new AI laws targeted specifically at "superintelligent" AI systems, or those that could become far smarter than humans.

"Specialised AIs, such as those advancing science and medicine, boost growth, innovation, and public services. Superintelligent AI systems would [by contrast] compromise national and global security," the statement reads. "The U.K. can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems."

Miotti, from Control AI, says that the U.K. does not have to sacrifice growth by imposing sweeping regulations such as those contained in the E.U. AI Act. Indeed, many in the industry blame the AI Act and other sweeping E.U. laws for stymying the growth of the European tech sector. Instead, Miotti argues, the U.K. could impose "narrow, targeted, surgical" AI regulation that applies only to the most powerful models, which pose what he sees as the biggest risks.

"What the public wants is systems that help them, not systems that replace them," Miotti says. "We should not pursue [superintelligent systems] until we know how to prove that they're safe."

The polling data also shows that a large majority (74%) of Brits support a pledge made by the Labour Party ahead of the last election to enshrine the U.K.'s AI Safety Institute (AISI) into law, giving it the power to act as a regulator. Currently, the AISI, an arm of the U.K. government, carries out tests on private AI models ahead of their release, but has no authority to compel tech companies to make changes or to rule that models are too dangerous to be released.

Write to Billy Perrigo at billy.perrigo@time.com