WWW.COMPUTERWORLD.COM
US wants to nix the EU AI Act’s code of practice, leaving enterprises to develop their own risk standards
The European Union (EU) AI Act may seem like a done deal, but stakeholders are still drafting the code of practice that will lay out rules for general-purpose AI (GPAI) models, including those with systemic risk. Now, though, as that drafting process approaches its deadline, US President Donald Trump is reportedly pressuring European regulators to scrap the rulebook.

The US administration and other critics claim that it stifles innovation, is burdensome, and extends the bounds of the AI law, essentially creating new, unnecessary rules. The US government’s Mission to the EU recently reached out to the European Commission and several European governments to oppose its adoption in its current form, Bloomberg reports.

“Big tech, and now government officials, argue that the draft AI rulebook layers on extra obligations, including third party model testing and full training data disclosure, that go beyond what is in the legally binding AI Act’s text, and furthermore, would be very challenging to implement at scale,” explained Thomas Randall, director of AI market research at Info-Tech Research Group.

Onus is shifting from vendor to enterprise

On its web page describing the initiative, the European Commission said, “the code should represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices.” The code is voluntary, but the goal is to help providers prepare to satisfy the EU AI Act’s regulations around transparency, copyright, and risk mitigation.

It is being drafted by a diverse group of general-purpose AI model providers, industry organizations, copyright holders, civil society representatives, members of academia, and independent experts, overseen by the European AI Office. The deadline for its completion is the end of April. The final version is set to be presented to EU representatives for approval in May, and will go into effect in August, one year after the AI Act came into force.
It will have teeth: Randall pointed out that once it takes effect, non-compliance could draw fines of up to 7% of global revenue, or heavier scrutiny from regulators.

But whether Brussels, the de facto capital of the EU, relaxes or enforces the current draft, the weight of ‘responsible AI’ is already shifting from vendors to the customer organizations deploying the technology, he noted.

“Any organization conducting business in Europe needs to have its own AI risk playbooks, including privacy impact checks, provenance logs, or red-team testing, to avoid contractual, regulatory, and reputational damages,” Randall advised.

He added that if Brussels did water down its AI code, it wouldn’t just be handing companies a free pass; “it would be handing over the steering wheel.” Clear, well-defined rules at least mark where the guardrails sit, he noted. Strip those out, and every firm, from garage startup to global enterprise, will have to chart its own course on privacy, copyright, and model safety. Some will race ahead; others will likely have to tap the brakes, because the liability would “sit squarely on their desks.”

“Either way, CIOs need to treat responsible AI controls as core infrastructure, not a side project,” said Randall.

A lighter touch regulatory landscape

If other countries were to follow the current US administration’s approach to AI legislation, the result would likely be a lighter-touch regulatory landscape with reduced federal oversight, noted Bill Wong, AI research fellow at Info-Tech Research Group. He pointed out that in January, the US administration issued Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” Shortly afterward, the National Institute of Standards and Technology (NIST) updated its guidance for scientists working with the US Artificial Intelligence Safety Institute (AISI).
In that update, references to “AI safety,” “responsible AI,” and “AI fairness” were removed; instead, a new emphasis was placed on “reducing ideological bias to enable human flourishing and economic competitiveness.” Wong said: “In effect, the updated guidance appears to encourage partners to align with the executive order’s deregulatory stance.”