AI companies upped their federal lobbying spend in 2024 amid regulatory uncertainty
techcrunch.com
Companies spent significantly more lobbying on AI issues at the U.S. federal level last year compared to 2023 amid regulatory uncertainty.

According to data compiled by OpenSecrets, 648 companies spent on AI lobbying in 2024 versus 458 in 2023, a roughly 41% year-over-year increase. Companies like Microsoft supported legislation such as the CREATE AI Act, which would support the benchmarking of AI systems developed in the U.S. Others, including OpenAI, put their weight behind the Advancement and Reliability Act, which would set up a dedicated government center for AI research.

Most AI labs (that is, companies dedicated almost exclusively to commercializing various kinds of AI tech) spent more backing legislative agenda items in 2024 than in 2023, the data shows. OpenAI upped its lobbying expenditures to $1.76 million last year from $260,000 in 2023. Anthropic, OpenAI's close rival, more than doubled its spend from $280,000 in 2023 to $720,000 last year, and enterprise-focused startup Cohere boosted its spending to $230,000 in 2024 from just $70,000 two years ago.

Both OpenAI and Anthropic made hires over the last year to coordinate their policymaker outreach. Anthropic brought on its first in-house lobbyist, Department of Justice alum Rachel Appleton, and OpenAI hired political veteran Chris Lehane as its new VP of policy.

All told, OpenAI, Anthropic, and Cohere set aside $2.71 million combined for their 2024 federal lobbying initiatives. That's a tiny figure compared to what the larger tech industry put toward lobbying in the same timeframe ($61.5 million), but more than four times the total the three AI labs spent in 2023 ($610,000).

TechCrunch reached out to OpenAI, Anthropic, and Cohere for comment but did not hear back as of press time.

Last year was a tumultuous one in domestic AI policymaking. In the first half alone, Congressional lawmakers considered more than 90 AI-related pieces of legislation, according to the Brennan Center.
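The spending figures above can be sanity-checked with a short script; the values are taken directly from the article, and the dictionary structure is just an illustrative way to organize them:

```python
# Lobbying spend by the three AI labs, in USD, as reported above.
spend_2023 = {"OpenAI": 260_000, "Anthropic": 280_000, "Cohere": 70_000}
spend_2024 = {"OpenAI": 1_760_000, "Anthropic": 720_000, "Cohere": 230_000}

total_2023 = sum(spend_2023.values())
total_2024 = sum(spend_2024.values())
print(f"2023 total: ${total_2023:,}")                      # $610,000
print(f"2024 total: ${total_2024:,}")                      # $2,710,000
print(f"Growth multiple: {total_2024 / total_2023:.2f}x")  # 4.44x

# Companies lobbying on AI issues overall, per OpenSecrets.
companies_2023, companies_2024 = 458, 648
growth = (companies_2024 - companies_2023) / companies_2023
print(f"YoY increase in companies: {growth:.0%}")          # 41%
```

This confirms the combined 2024 total of $2.71 million, that it is more than four times the 2023 total of $610,000, and that 648 companies versus 458 is an increase of about 41%, not 141%.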
At the state level, over 700 laws were proposed. Congress made little headway, prompting state lawmakers to forge ahead. Tennessee became the first state to protect voice artists from unauthorized AI cloning. Colorado adopted a tiered, risk-based approach to AI policy. And California governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require AI companies to disclose details about their training.

However, no state officials were successful in enacting AI regulation as comprehensive as international frameworks like the EU's AI Act. After a protracted battle with special interests, Governor Newsom vetoed bill SB 1047, which would have imposed wide-ranging safety and transparency requirements on AI developers. Texas' TRAIGA (Texas Responsible AI Governance Act) bill, which is even broader in scope, may suffer the same fate once it makes its way through the statehouse.

It's unclear whether the federal government can make more progress on AI legislation this year than last, or even whether there's a strong appetite for codification. President Donald Trump has signaled his intention to largely deregulate the industry, clearing what he perceives to be roadblocks to U.S. dominance in AI. During his first day in office, Trump revoked an executive order by former president Joe Biden that sought to reduce the risks AI might pose to consumers, workers, and national security. On Thursday, Trump signed an EO instructing federal agencies to suspend certain Biden-era AI policies and programs, potentially including export rules on AI models.

In November, Anthropic called for targeted federal AI regulation within the next 18 months, warning that the window for proactive risk prevention is closing fast. For its part, OpenAI, in a recent policy doc, called on the U.S. government to take more substantive action on AI and infrastructure to support the technology's development.