Lord Holmes warns of increasingly urgent need to regulate AI
www.computerweekly.com
The UK government must urgently legislate on artificial intelligence (AI) given the clearly negative impacts it is already having on many people's day-to-day lives, warns Conservative peer Lord Holmes in a report.

In November 2023, Holmes introduced an AI private member's bill to Parliament in lieu of any formal proposals from government at the time, which focused on establishing measures for adaptive regulation, inclusive design, ethical standards, transparency, accountability, education and international cooperation.

Holmes said in the report that while his bill was intended to proactively engage the public and fellow parliamentarians with the ideas and legislative steps needed to ensure AI is shaped positively for the benefit of all, the technology remains largely under-regulated, which is allowing a range of harms to flourish unabated.

"Whether it's discrimination and bias in AI algorithms, disinformation from synthetic imagery, scams using voice-mimicking technology, copyright theft or unethical chatbot responses, we are already facing a host of problems from existing AI," he said.

Speaking during a roundtable on the launch of the report, Holmes added that while it was urgent to regulate AI when he initially proposed his private member's bill back in 2023, "I believe it remains even more pressing today".

Highlighting eight archetypal examples of people living at the sharp end of unregulated AI in the UK, Holmes' report, published on 26 February 2025, shows how the technology is already negatively impacting people's lives due to the lack of effective protections in place. For each of the examples, the report lays out the problem and how his proposed AI bill could address the issues at hand.

In the case of the benefit claimant, for example, he noted how the Department for Work and Pensions (DWP) has consistently failed to inform the public about the algorithms it is deploying to make decisions about people's lives, and flagged that automated systems have wrongly led to thousands of indefinite benefit suspensions or fraud investigations.

To alleviate this, Holmes said clause two of his bill would set the principles of the previous Conservative government's AI white paper on a statutory footing, including measures around transparency, explainability, accountability, contestability and redress, as well as a duty not to discriminate.

He also highlighted a separate AI private member's bill introduced in September 2024 by Liberal Democrat peer Lord Clement-Jones, which more narrowly aims to establish a clear, mandatory framework for the responsible use of algorithmic and automated decision-making systems in the public sector.

For the jobseeker, Holmes said that while AI is increasingly being deployed in recruitment processes, there are no specific laws currently regulating the use of the technology in employment decisions. He added that this has led to people being unfairly excluded from roles due to training data being heavily influenced by years of male-dominated hiring patterns, and creates further issues around the over-collection of personal data to inform the systems and a general lack of transparency around models.

Again highlighting clause two of his bill, Holmes said further clauses, establishing a horizontally focused AI authority (which would undertake a gap analysis of existing regulatory responsibility and ensure alignment across different sectoral regulators) and AI responsible officers, would also strengthen protections for jobseekers subject to AI.

Other archetypal examples highlighted by Holmes include the teacher, the teenager, the scammed, the creative, the voter and the transplant patient, all of whom he said would benefit from a number of other clauses in his private member's bill. These include clauses on meaningful, long-term public engagement around the opportunities and risks of AI, as well as transparency around the use of third-party data and intellectual property (IP) in training sets, which must be obtained by informed consent.

Speaking during the report roundtable, participants, including representatives from civil society groups, trade unions and research bodies, as well as other Lords, highlighted a number of key considerations for regulating AI. These include leveraging the procurement powers of governments in ways that reflect the values they are trying to achieve, which they argued could act as a form of soft power over tech firms, and ensuring people feel they have a say over the development and deployment of the technology throughout the public sector and their workplaces.

The participants further warned that if AI systems are adopted throughout the public sector without effective regulation in place, it will irrevocably erode people's trust in the state.

Hannah Perry, head of research for digital policy at think tank Demos, for example, said AI could contribute to the further decimation of trust we are seeing in society at the moment, due to its tendency to act as a centralising force that risks removing the public from decision-making and disempowering them.

She added it was therefore crucial to have some form of public engagement, and that creating a deliberative platform where ordinary people are able to influence digital rights and principles should be embedded in any UK AI regulation.

Commenting on the need for participatory regulatory approaches, Mary Towers, an employment rights officer at the Trades Union Congress (TUC) specialising in the use of AI and tech at work, said AI is already having worrying consequences for workers across a wide range of sectors, including work intensification, reduced agency and autonomy at work due to algorithmic management practices, negative mental health impacts, and unfair or discriminatory outcomes.

Flagging TUC polling on worker attitudes towards AI, Towers added that some 70% of workers believe there should be a statutory right for workers to be consulted by their employers before new technology is implemented at work.

She added: "Clearly, we believe there should be legislation. It should be context-specific. But I also want to highlight that regulation isn't just about legislation. Consultation, participation, collective bargaining, the social partnership approach: those are all certainly forms of regulation."

Andrew Strait, associate director at the Ada Lovelace Institute (ALI), added that while surveys find most people do not rank AI as a priority issue on its own, this changes when they are asked about its use in sensitive public sector contexts, such as health and social care or benefit allocation decisions.

"Suddenly people really care," he said. "They're very concerned, very nervous, very uncomfortable with the pace of adoption, the lack of guardrails, the sense that things are moving too quickly and in a way where human autonomy and expert decision-making are being pushed out of the way for speed and efficiency. That then begs the question of, what is it that people want? They want regulation. They want rules to feel comfortable about it.
They want to feel like they do when they go on an airplane, where there's been rigorous safety testing, norms and standards."

Strait further highlighted that, in the ALI's experience of engaging with private companies, the single greatest barrier to increased AI adoption is the lack of reliability in the technology, something that standards and regulation would also give them more certainty on.

The roundtable participants also argued vehemently against creating a binary between innovation and growth on the one hand, and safety and regulation on the other.

Keith Rosser, director at Reed Screening and a member of the Better Hiring Institute's advisory board, said, for example, that because the recruitment sector is already awash with AI, with both jobseekers and employers using the tech to make and sift through job applications respectively, "we've got all the risks, but only some of the opportunities".

He added that without regulation, this situation will persist: "Businesses want to be supported by governments, they want to know where the guardrails are ... For both sides of this use case, the jobseeker and the hiring company, no regulation means there's huge uncertainty."

Roger Taylor, first chair of the UK's Centre for Data Ethics and Innovation, added that the use of AI in government is likely the most significant area where there is no regulation: "The tension at the moment is this fear that growth and regulation fight against each other, and growth is the most important thing, followed by making public services more efficient and more effective before the next election comes along.

"It's quite understandable why those would be the priorities. The question is, is it really true that regulatory measures are counterproductive? ... We do need to pass a law that puts in place some kind of legal regulatory mechanism, not just because we want the assurance and we're worried about things going on, but because it is an enormous opportunity for this country to demonstrate that we can lead in this area."

Read more about AI regulation

TUC publishes legislative proposal to protect workers from AI: The Trades Union Congress (TUC) has published a ready-to-go law for regulating artificial intelligence (AI) in the workplace, setting out a range of new legal rights and protections to manage the adverse effects of automated decision-making on workers.

AI Action Summit: Global leaders decry AI red tape: The focus of previous AI summits on the safety of artificial intelligence systems has been replaced by concerns there is too much regulatory red tape, which politicians and AI developers have argued is holding back innovation.

UN body urges globally inclusive and distributed AI governance: A United Nations body set up to investigate the international governance of AI says the nature of how the technology currently operates requires a global approach to regulation that prioritises equity and inclusion.