DSIT permanent secretary says more AI transparency needed
www.computerweekly.com
The Department for Science, Innovation and Technology's (DSIT's) most senior civil servant has said government must go further in improving transparency around the roll-out of artificial intelligence (AI) systems throughout the public sector.

Asked by members of the Public Accounts Committee (PAC) on 30 January 2025 how government can improve trust in the public sector's increasing use of AI and algorithmic decision-making tools, DSIT permanent secretary Sarah Munby said there's "more to do on transparency", which can help build up trust in how automated tools are being used.

Munby said the public sector needs to be clear where, for example, AI has been used in letters or emails from government to citizens (something she said is reflected in government guidance), as well as focus on how it communicates to people across the country on AI-related issues.

She added that if government fails to be demonstrably trustworthy, it will ultimately become a blocker of progress for the further roll-out of AI tools.

A major aspect of the government's efforts here is the Algorithmic Transparency Recording Standard (ATRS), which was designed in collaboration between DSIT and the Central Digital and Data Office (CDDO), and rolled out in September 2022 to improve public sector transparency and provide more information about the algorithmic tools public bodies are using.

While DSIT announced in February 2024 that it intended to make the ATRS a mandatory requirement for all government departments during 2024 (as well as expand its use to the broader public sector over time), the standard has been criticised over the lack of engagement with it so far, despite government having hundreds of AI-related contracts.

In March 2024, the National Audit Office (NAO) highlighted how only eight of 32 organisations responding to its AI deployment survey said they were always or usually compliant with the standard.
At that point, just seven records were contained in the ATRS.

As it stands, there are currently 33 records contained in the ATRS, 10 of which were voluntarily published on 28 January by local authorities not covered by the central department mandate.

Commenting on the ATRS, Munby admitted "we need to get more out", noting that another 20 or so are due to be published in February, with lots more to follow throughout the year.

"It's absolutely our view that they should all be out and published," she said. "It takes a bit of time to get them up and get them running. It hasn't been mandatory for that long, but there's been a significant acceleration in pace recently, and we expect that to continue."

Munby also highlighted that getting the law right is an important component of building trust.
"There's quite an extensive set of provisions in the Data [Use and Access] Bill which are about making sure that where automated decision-making takes place, there are really good forms of redress, including the ability to challenge [decisions]," she said.

While the Labour government adopted almost every recommendation of the recently published AI action plan (which proposed increasing both trust in and adoption of the technology through building up the UK's AI assurance ecosystem), none of the recommendations mentioned transparency requirements.

In written evidence to the PAC published on 30 January, a group of academics, including Jo Bates, a professor of data and society at the University of Sheffield, and Helen Kennedy, a professor of digital society at the University of Sheffield, said it was key to have "socially meaningful transparency" around the use of public sector AI and algorithms.

"Socially meaningful transparency focuses on enhancing public understanding of AI systems for informed use and democratic engagement in datafied societies," they said. "This is important given the widely evidenced risks of AI, e.g. algorithmic bias and discrimination, that publics are increasingly aware about. Socially meaningful transparency prioritises the needs and interests of members of the public over those of AI system developers."

They added that government should work to reduce information asymmetries around AI through the mandated registration of systems, as well as by fostering discussion and decision-making between government and non-commercial third parties, including members of the public, about what AI-related information is released publicly.

Further written evidence from Michael Wooldridge, a professor of computer science at the University of Oxford, also highlighted the need to increase public trust in government AI, where transparency can play an essential role.

"Some people are excited about AI; but many more are worried about it," he said.
"They are worried about their jobs, about their privacy, and they may even be worried (wrongly) about existential threat."

"However well-motivated the use of AI in government is, I think it is likely that the government use of AI will therefore be met by scepticism (at best), and hostility and anger at worst," said Wooldridge. "These fears, however misplaced, need to be taken seriously, and transparency is absolutely essential to build trust."