U.S. Gathers Global Group to Tackle AI Safety Amid Growing National Security Concerns
U.S. Commerce Secretary Gina Raimondo at the inaugural convening of the International Network of AI Safety Institutes in San Francisco on Nov. 20, 2024. Jeff Chiu / AP

By Tharin Pillay / San Francisco
November 21, 2024 1:00 AM EST

AI is "a technology like no other in human history," U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. "Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn't the smart thing to do."

Raimondo's remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations as well as the European Commission, brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems.

Raimondo suggested participants keep two principles in mind: "We can't release models that are going to endanger people," she said. "Second, let's make sure AI is serving people, not the other way around."

The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the U.K. Both the U.K. and U.S. governments announced the formation of their respective AISIs as a means of giving their governments the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI summit in Seoul, Raimondo had announced the creation of the network.

In a joint statement, the members of the International Network of AI Safety Institutes, which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore, laid out their mission: "to be a forum that brings together technical expertise from around the world," "to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community," and "to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development."

In the lead-up to the convening, the U.S. AISI, which serves as the network's inaugural chair, also announced a new government taskforce focused on the technology's national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI and will aim to "identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology," with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.

The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not "get to write the rules of the road."
Earlier Wednesday, Chinese lab DeepSeek announced a new reasoning model thought to be the first to rival OpenAI's own reasoning model, o1, which the company says is designed to spend more time thinking before it responds.

On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress "establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability," which the commission defined as systems "as good as or better than human capabilities across all cognitive domains" that "would surpass the sharpest human minds at every task."

Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday's event, Anthropic CEO Dario Amodei, who believes AGI-like systems could arrive as soon as 2026, cited "loss of control" risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting "we also need to be really careful about how we do it."

Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model: the upgraded version of Anthropic's Claude 3.5 Sonnet. The evaluation focused on assessing the model's biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be routinely circumvented, which they noted is consistent with prior research on the vulnerability of other AI systems' safeguards.

The San Francisco convening set out three priority topics that stand to urgently benefit from international collaboration: managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation.

While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, to accelerate the design and implementation of frontier AI safety frameworks. And in February, France will host its AI Action Summit, following the summits held in Seoul in May and in the U.K. last November.
The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek ways to govern the technology even as its capabilities accelerate.

Raimondo on Wednesday emphasized the importance of integrating safety with innovation "when it comes to something as rapidly advancing and as powerful as AI." "It has the potential to replace the human mind," she said.

Contact us at letters@time.com