WWW.INFORMATIONWEEK.COM
What Could Less Regulation Mean for AI?
President-elect Trump has been vocal about plans to repeal the AI executive order signed by President Biden. A second Trump administration could mean a lot of change for oversight in the AI space, but what exactly that change will look like remains uncertain.

"I think the question is then what incoming President Trump puts in its place," says Doug Calidas, senior vice president of government affairs for Americans for Responsible Innovation (ARI), a nonprofit focused on policy advocacy for emerging technologies. "The second question is the extent to which the actions the Biden administration and the federal agencies have already taken pursuant to the Biden executive order [stand]. What happens to those?"

InformationWeek spoke to Calidas and three other leaders tuned into the AI sector to cast an eye to the future and consider what a hands-off approach to regulation could mean for companies in this booming technology space.

A Move to Deregulation?

Experts anticipate a more relaxed approach to AI regulation from the Trump administration.

"Obviously, one of Trump's biggest supporters is Elon Musk, who owns an AI company. And so that, coupled with the statement that Trump is interested in pulling back the AI executive order, suggests that we're heading into a space of deregulation," says Betsy Cooper, founding director at Aspen Tech Policy Hub, a policy incubator focused on tech policy entrepreneurs.

Billionaire Musk, along with entrepreneur Vivek Ramaswamy, is set to lead Trump's Department of Government Efficiency (DOGE), which is expected to lead the charge on significantly cutting back regulation.
While conflict-of-interest questions swirl around his appointment, it seems likely that Musk's voice will be heard in this administration.

"He famously came out in support of California SB 1047, which would require testing and reporting for the cutting-edge systems and impose liability for truly catastrophic events, and I think he's going to push for that at the federal level," says Calidas. "That's not to take away from his view that he wants to cut regulations generally."

While we can look to Trump's and Musk's comments for an idea of what this administration's approach to AI regulation could be, there are mixed messages to decipher. Andrew Ferguson, Trump's selection to lead the US Federal Trade Commission (FTC), raises questions. He aims to regulate big tech while remaining hands-off when it comes to AI, Reuters reports.

"Of course, big tech is AI tech these days. So, Google, Amazon, all these companies are working on AI as a key element of their business," Cooper points out. "So, I think now we're seeing mixed messages. On the one hand, moving towards deregulation of AI, but if you're regulating big tech, then it's not entirely clear which way this is going to go."

More Innovation?

Innovation and the ability to compete in the AI space are two big factors in the argument for less regulation. But repealing the AI executive order alone is unlikely to be a major catalyst for innovation.

"The idea that even if some of those requirements were to go away you would unleash innovation, I don't think really makes any sense at all. There's really very little regulation to be cut in the AI space," says Calidas.

If the Trump administration does take that hands-off approach, opting not to introduce AI regulation, companies may move faster when it comes to developing and releasing products.

"Ultimately, [for] mid-market to large enterprises, their innovation is being chilled if they feel like there's maybe undefined regulatory risk or a very large regulatory burden that's looming," says Casey Bleeker, CEO and cofounder of SurePath AI, a GenAI security firm.

Does more innovation mean more power to compete with other countries, like China? Bleeker argues regulation is not the biggest influence. "If the actual political objective was to be competitive with China, nothing's more important than having access to silicon and GPU resources for that. It's probably not the regulatory framework," he says.

Giving the US a lead in the global AI market could also be a question of research and resources. Most research institutions do not have the resources of large, commercial entities, which can use those resources to attract more talent.

"[If] we're trying to increase our competitiveness and velocity and innovation, putting funding behind research institutions and education institutions and open-source projects, that's actually another way to advocate or accelerate," says Bleeker.

Safety Concerns?

Safety has been one of the biggest reasons that supporters of AI regulation cite. If the Trump administration chooses not to address AI safety at a federal level, what could we expect? You may see companies making decisions to release products more quickly if AI safety is deprioritized, says Cooper.

That doesn't necessarily mean AI companies can ignore safety completely. Existing consumer protections address some issues, such as discrimination.

"You're not allowed to use discriminatory aspects when you make consumer-impacting decisions. That doesn't change if it's a manual process or if it's AI, or if you've intentionally done it or by accident," says Bleeker. "[There] are all still civil liabilities and criminal liabilities that are in the existing frameworks."

Beyond regulatory compliance, companies developing, selling, and using AI tools have their reputations at stake. If their products or use of AI harms customers, they stand to lose business. In some cases, reputation may not be as big of a concern. "A lot of smaller developers who don't have a reputation to protect probably won't care as much and will release models that may well be based on biased data and have outcomes that are undesirable," says Calidas.

It is unclear what the new administration could mean for the AI Safety Institute, a part of the National Institute of Standards and Technology (NIST), but Cooper considers it a key player to watch. "Hopefully that institute will continue to be able to do important work on AI safety and continue business as usual," she says.

Biased data, discriminatory outcomes, and consumer privacy violations are chief among the potential current harms of AI models. But there is also much discussion of speculative harm relating to artificial general intelligence (AGI). Will any regulation be put in place to address those concerns in the near future? The answer to that question is unclear, but there is an argument to be made that these potential harms should be addressed at a policy level.

"People have different views about how likely they are ... but they're certainly well within the mainstream of things that we should be thinking about and crafting policy to consider," Calidas argues.

State and International Regulations?

Even if the Trump administration opts for less regulation, companies will still have to contend with state and international regulations.
Several states have already passed legislation addressing AI, and other bills are up for consideration. "When you look at big states like California, that can have huge implications," says Cooper.

International regulation, such as the EU AI Act, has bearing on large companies that conduct business around the world. But it does not negate the importance of legislation passed in the US. "When the US Congress considers action, it's still very hotly contested, because US law very much matters for US companies even if the EU is doing something different," says Calidas.

State-level regulations are likely to tackle a broad range of issues relating to AI, including energy use. "I've spent my time talking to legislators from Virginia, from Tennessee, from Louisiana, from Alaska, Colorado, and beyond, and what's been really clear to me is that in every conversation about AI, there is also a conversation happening around energy," Aya Saed, director of AI policy and strategy at Scope3, a company focused on supply chain emissions data, tells InformationWeek.

AI models require a massive amount of energy to train. The question of energy use and sustainability is a big one in the AI space, particularly when it comes to remaining competitive. "There's the framing of energy and sustainability actually as a national security imperative," says Saed.

As more states tackle AI issues and pass legislation, complaints of a regulatory patchwork are likely to increase. Whether that leads to a more cohesive regulatory framework at the federal level remains to be seen.

The Outlook for AI Companies

The first 100 days of the new administration could shed more light on what to expect in the realm of AI regulation, or the lack thereof.

"Do they pass any executive orders on this topic? If so, what do they look like? What do the new appointees take on? How especially does the antitrust division of both the FTC and the Department of Justice approach these questions?" asks Cooper. "Those would be some of the things I'd be watching."

Calidas notes that this term will not be Trump's first time taking action relating to AI. The American AI Initiative executive order of 2019 addressed several issues, including research investment, computing and data resources, and technical standards.

"By and large, that order was preserved by the Biden administration. And we think that that's a starting point for considering what the Trump administration may do," says Calidas.