EU pulls back for the moment on privacy and genAI liability compliance regulations
www.computerworld.com
When the EU said on Tuesday that it was not, at this time, moving ahead with critical legislation involving privacy and genAI liability issues, it honestly reported that members couldn't agree. But the reasons why they couldn't agree get much more complicated.

The EU decisions involved two seemingly unrelated pieces of legislation: one dealing with privacy efforts, often called the "cookie law," and the other dealing with AI liability.

The decisions appear in the annexes to the Commission's work programme for 2025, in Annex IV, items 29 and 32. For the AI liability item (on adapting non-contractual civil liability rules to artificial intelligence), the EU found "no foreseeable agreement," adding that "the Commission will assess whether another proposal should be tabled or another type of approach should be chosen."

For the privacy/cookie item (concerning the respect for private life and the protection of personal data in electronic communications), the EU said, "No foreseeable agreement: no agreement is expected from the co-legislators. Furthermore, the proposal is outdated in view of some recent legislation in both the technological and the legislative landscape."

Various EU specialists said those explanations were correct, but that the reasons behind the member countries' decisions were more complex.

Andrew Gamino-Cheong, CTO at AI company Trustible, said different countries had different, and incompatible, positions.

"The EU member states have started to split on their own attitudes related to AI. On one extreme is France, which is trying to be pro-innovation, and [French President Emmanuel] Macron used the [AI summit] this past week to emphasize that," Gamino-Cheong said. "Others, including Germany, are very skeptical of AI still and were pushing for these regulations."
If France and Germany, the economic heavyweights in the EU, are at odds, nothing will get done.

But Gamino-Cheong, along with many others, said there is a fear that the global AI arms race may hurt countries that impose too many compliance requirements.

"The EU is seen as being too aggressive, overregulating. The EU takes a two-sentence description and writes 14.5 pages about it and then contradicts itself in multiple areas," Gamino-Cheong said.

Ian Tyler-Clarke, an executive counselor at the Info-Tech Research Group, said he was not happy that the two proposed bills did not go forward, because he fears how those moves will influence other countries.

"Beyond the EU, this decision could have broader geopolitical consequences. The EU has long been a global leader in setting regulatory precedents, particularly with GDPR, which influenced privacy laws worldwide. Without new AI liability rules, other regions may hesitate to introduce their own regulations, leading to a fragmented global approach," Tyler-Clarke said. "Conversely, this could trigger a regulatory race to the bottom, where jurisdictions with the least restrictions attract AI development at the cost of oversight and accountability."

A very different perspective comes from Enza Iannopollo, a Forrester principal analyst based in London. Asked about the failure to move forward on the privacy bill, Iannopollo said, "Thank God that they have withdrawn that one. There are more pressing priorities to address."

She said the privacy effort suffered from the rapid advances in web controls, including some changes made by Google. "Regulators were not convinced that they would improve things," Iannopollo said.

Regarding the AI liability rules, Iannopollo said she expects to see them come back in a revised form. "I don't think this is a final call."
"They are just buying time," she said. The critical factor is that another, much larger piece of legislation, called simply the EU AI Act, is just about to kick in, and regulators wanted to see how its enforcement goes before expanding it. "They want to see how these other pieces of the framework are going to work. There are a lot of moving parts, so [delaying] is wise."

Another analyst, Anshel Sag, VP and principal analyst with Moor Insights & Strategy, said EU members are very concerned with how they are perceived globally.

"The real challenge is that applying regulations too early, without the industry being mature enough, risks hurting European companies and European competitiveness, which I believe is a major factor in why these regulations have been paused for now," Sag said. "Especially when you consider the current rate of change within AI, there's just a chance that they could spend a long time on this regulation and by the time it's out, it's already well out of date. They will have to act fast, though, when the time is right."

Added Vincent Schmalbach, an independent AI engineer in Munich: "The most interesting part is how this represents a major shift in EU thinking. It went from being the world's strictest tech regulator to acknowledging they need to focus on not falling further behind in the AI race."

Michael Isbitski, principal application security architect for genAI at ADP, the $19 billion HR and payroll enterprise, and also a former Gartner analyst, sees the two proposed EU legislative efforts as potentially having had a massive impact on data strategies.

The proposed AI rule, he said, involved the retention of AI-generated data logs. "Everywhere there is some kind of AI transaction, you need to retain those logs, for every query, anywhere," Isbitski said. "Think about what needs to be done to secure your requirements and controls systems, along with your cloud security."
"Logging seems simple, but if you look at a complete AI interaction, there are an awful lot of interconnects," he added.

However, Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions, said the pausing of these two potential EU rules will likely have no significant impact on enterprises.

"This means you can continue to do everything you were doing before. There will be no new constraints on top of anything you were doing," Villanustre said.

But the broader issue of genAI liability absolutely needs to be addressed, he said, because the current mechanisms are woefully inadequate. That is because the very nature of genAI, especially its stochastic and probabilistic attributes, makes liability attribution virtually impossible.

Let's say something bad happens with an LLM deployment: for example, a company loses billions of dollars or there is a loss of life. There are typically going to be three possible groups to blame: the model-maker, which creates the algorithm and trains the model; the enterprise, which fine-tunes the model and adapts it to that enterprise's needs; and the user base, which would be the employees, partners, or customers who pose the queries to the model.

Overwhelmingly, when a problem happens, it will be because of the interactions of efforts by two or three of those groups. Without the new legislation proposed by the EU, the only means of determining liability will be via legal contracts.

But genAI is a different kind of system. It can be asked the identical question five times and offer five different answers. That being the case, if its developers cannot accurately predict what it will do in different situations, Villanustre wondered what chance attorneys have of anticipating all problems.

"That is a challenge: determining who has the responsibility," Villanustre said. "This legislation was meant to define the liability outside of contracts."
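Isbitski's point about retaining a log for every AI transaction can be made concrete with a short sketch. Everything below — the field names, the helper functions, and the in-memory store — is a hypothetical illustration of what a per-query audit record might look like, not a schema mandated by the proposed EU rules:

```python
import json
import time
import uuid

def make_audit_record(user_id, model_id, prompt, response):
    """Build one retainable log entry for a single AI interaction.
    Field names here are assumptions for illustration only."""
    return {
        "record_id": str(uuid.uuid4()),  # unique id for this transaction
        "timestamp": time.time(),        # when the query was made
        "user_id": user_id,              # who posed the query
        "model_id": model_id,            # which model answered it
        "prompt": prompt,
        "response": response,
    }

class AuditLog:
    """In-memory stand-in for a durable, append-only log store."""
    def __init__(self):
        self._records = []

    def append(self, record):
        # Serialize at write time so the stored form is immutable text.
        self._records.append(json.dumps(record))

    def count(self):
        return len(self._records)

log = AuditLog()
rec = make_audit_record("u-123", "demo-model",
                        "What is the EU AI Act?", "(model output)")
log.append(rec)
print(log.count())  # prints 1
```

Even this toy version hints at the "interconnects" Isbitski mentions: a real deployment would need the same capture at every gateway, retrieval step, and downstream tool call, plus secure, tamper-evident storage for the retained records.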