What Donald Trump's Win Means For AI
When Donald Trump was last President, ChatGPT had not yet been launched. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the artificial intelligence landscape looks quite different.

AI systems are advancing so rapidly that some leading executives of AI companies, such as Anthropic CEO Dario Amodei and Elon Musk, the Tesla CEO and a prominent Trump backer, believe AI may become smarter than humans by 2026. Others offer a more general timeframe. In an essay published in September, OpenAI CEO Sam Altman said, "It is possible that we will have superintelligence in a few thousand days," but also noted that it may take longer. Meanwhile, Meta CEO Mark Zuckerberg sees the arrival of these systems as more of a gradual process than a single moment. Either way, such advances could have far-reaching implications for national security, the economy, and the global balance of power.

Trump's own pronouncements on AI have fluctuated between awe and apprehension. In a June appearance on Logan Paul's Impaulsive podcast, he described AI as a "superpower" and called its capabilities "alarming." And like many in Washington, he views the technology through the lens of competition with China, which he sees as the primary threat in the race to build advanced AI. Yet even his closest allies are divided on how to govern the technology: Musk has long voiced concerns about AI's existential risks, while J.D. Vance, Trump's Vice President-elect, sees such warnings from industry as a ploy to usher in regulations that would entrench the tech incumbents. These divisions among Trump's confidants hint at the competing pressures that will shape AI policy during Trump's second term.

Undoing Biden's AI legacy

Trump's first major AI move is expected to be the repeal of Joe Biden's Executive Order on AI. The sweeping order, signed in October 2023, sought to address threats the technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services. Trump promised to repeal the Executive Order on the campaign trail in December 2023, and this position was reaffirmed in the Republican Party platform in July, which criticized the order for hindering innovation and imposing "radical leftwing ideas" on the technology's development.

Sections of the Executive Order that focus on racial discrimination or inequality are "not as much Trump's style," says Dan Hendrycks, executive and research director of the Center for AI Safety. While experts have criticized any rollback of bias protections, Hendrycks says the Trump Administration may preserve other aspects of Biden's approach. "I think there's stuff in [the Executive Order] that's very bipartisan, and then there's some other stuff that's more specifically Democrat-flavored," Hendrycks says.

"It would not surprise me if a Trump executive order on AI maintained or even expanded on some of the core national security provisions within the Biden Executive Order, building on what the Department of Homeland Security has done for evaluating cybersecurity, biological, and radiological risks associated with AI," says Samuel Hammond, a senior economist at the Foundation for American Innovation, a technology-focused think tank.

The fate of the U.S. AI Safety Institute (AISI), an institution created last November by the Biden Administration to lead the government's efforts on AI safety, also remains uncertain.
In August, the AISI signed agreements with OpenAI and Anthropic to formally collaborate on AI safety research, and on the testing and evaluation of new models. "Almost certainly, the AI Safety Institute is viewed as an inhibitor to innovation, which doesn't necessarily align with the rest of what appears to be Trump's tech and AI agenda," says Keegan McBride, a lecturer in AI, government, and policy at the Oxford Internet Institute. But Hammond says that while some fringe voices would move to shutter the institute, most Republicans are supportive of the AISI. "They see it as an extension of our leadership in AI."

Congress is already working on protecting the AISI. In October, a broad coalition of companies, universities, and civil society groups, including OpenAI, Lockheed Martin, Carnegie Mellon University, and the nonprofit Encode Justice, signed a letter calling on key figures in Congress to urgently establish a legislative basis for the AISI. Efforts are underway in both the Senate and the House of Representatives, and both reportedly have "pretty wide bipartisan support," says Hamza Chaudhry, U.S. policy specialist at the nonprofit Future of Life Institute.

America-first AI and the race against China

Trump's previous comments suggest that maintaining the U.S. lead in AI development will be a key focus for his Administration. "We have to be at the forefront," he said on the Impaulsive podcast in June. "We have to take the lead over China." Trump also framed environmental concerns as potential obstacles, arguing they could "hold us back" in what he views as the race against China.

Trump's AI policy could include rolling back regulations to accelerate infrastructure development, says Dean Ball, a research fellow at George Mason University. "There's the data centers that are going to have to be built. The energy to power those data centers is going to be immense. I think even bigger than that: chip production," he says. "We're going to need a lot more chips." Trump's campaign has at times attacked the CHIPS Act, which provides incentives for chipmakers to manufacture in the U.S., but some analysts believe he is unlikely to repeal it.

Chip export restrictions are likely to remain a key lever in U.S. AI policy. Building on measures he initiated during his first term, which were later expanded by Biden, Trump may well strengthen controls that curb China's access to advanced semiconductors. "It's fair to say that the Biden Administration has been pretty tough on China, but I'm sure Trump wants to be seen as tougher," McBride says. It is quite likely that Trump's White House will double down on export controls in an effort to close gaps that have allowed China to access chips, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. "The overwhelming majority of people on both sides think that the export controls are important," he says.

The rise of open-source AI presents new challenges. China has shown it can leverage U.S. systems, as demonstrated when Chinese researchers reportedly adapted an earlier version of Meta's Llama model for military applications. That's created a policy divide. "You've got people in the GOP that are really in favor of open-source," Ball says.
"And then you have people who are 'China hawks' and really want to forbid open-source at the frontier of AI.""My sense is that because a Trump platform has so much conviction in the importance and value of open-source I'd be surprised to see a movement towards restriction," Singer says.Despite his tough talk, Trump's deal-making impulses could shape his policy towards China. "I think people misunderstand Trump as a China hawk. He doesn't hate China," Hammond says, describing Trump's "transactional" view of international relations. In 2018, Trump lifted restrictions on Chinese technology company ZTE in exchange for a $1.3 billion fine and increased oversight. Singer sees similar possibilities for AI negotiations, particularly if Trump accepts concerns held by many experts about AIs more extreme risks, such as the chance that humanity may lose control over future systems.Trumps coalition is divided over AIDebates over how to govern AI reveal deep divisions within Trump's coalition of supporters. Leading figures, including Vance, favor looser regulations of the technology. Vance has dismissed AI risk as an industry ploy to usher in new regulations that would make it actually harder for new entrants to create the innovation thats going to power the next generation of American growth. Silicon Valley billionaire Peter Thiel, who served on Trumps 2016 transition team, recently cautioned against movements to regulate AI. Speaking at the Cambridge Union in May, he said any government with the authority to govern the technology would have a global totalitarian character. Marc Andreessen, the co-founder of prominent venture capital firm Andreessen Horowitz, gave $2.5 million to a pro-Trump super political action committee, and an additional $844,600 to Trumps campaign and the Republican Party. Yet, a more safety-focused perspective has found other supporters in Trump's orbit. Hammond, who advised on the AI policy committee for Project 2025, a proposed policy agenda led by right-wing think tank the Heritage Foundation, and not officially endorsed by the Trump campaign, says that within the people advising that project, [there was a] very clear focus on artificial general intelligence and catastrophic risks from AI.Musk, who has emerged as a prominent Trump campaign ally through both his donations and his promotion of Trump on his platform X (formerly Twitter), has long been concerned that AI could pose an existential threat to humanity. Recently, Musk said he believes theres a 10% to 20% chance that AI goes bad. In August, Musk posted on X supporting the now-vetoed California AI safety bill that would have put guardrails on AI developers. Hendrycks, whose organization co-sponsored the California bill, and who serves as safety adviser at xAI, Musks AI company, says If Elon is making suggestions on AI stuff, then I expect it to go well. However, theres a lot of basic appointments and groundwork to do, which makes it a little harder to predict, he says.Trump has acknowledged some of the national security risks of AI. In June, he said he feared deepfakes of a U.S. President threatening a nuclear strike could prompt another state to respond, sparking a nuclear war. He also gestured to the idea that an AI system could go rogue and overpower humanity, but took care to distinguish this position from his personal view. 
However, for Trump, competition with China appears to remain the primary concern. But these priorities aren't necessarily at odds, and AI safety regulation does not inherently entail ceding ground to China, Hendrycks says. He notes that safeguards against malicious use require minimal investment from developers. "You have to hire one person to spend, like, a month or two on engineering, and then you get your jailbreaking safeguards," he says.

But with these competing voices shaping Trump's AI agenda, the direction of his AI policy remains uncertain. "In terms of which viewpoint President Trump and his team side towards, I think that is an open question, and that's just something we'll have to see," says Chaudhry. "Now is a pivotal moment."