WWW.COMPUTERWEEKLY.COM
UK government launches AI assurance platform for enterprises
The UK government is launching an artificial intelligence (AI) assurance platform to help businesses across the country identify and mitigate the potential risks and harms posed by the technology, as part of a wider push to bolster the UK's burgeoning AI assurance sector.

Noting that 524 firms currently make up the UK's AI assurance market, employing more than 12,000 people and worth more than £1bn, the government said the platform would help raise awareness of and drive demand for the sector, which it believes could grow sixfold to around £6.5bn by 2035.

Launched on 6 November 2024, the platform is intended to act as a one-stop shop for AI assurance by bringing together existing assurance tools, services, frameworks and practices in one place, including the "introduction to AI assurance" and the "portfolio of AI assurance techniques" guidance previously created by the Department for Science, Innovation and Technology (DSIT).

The platform will also set out clear steps for businesses on how to carry out impact assessments and evaluations, as well as how to review data used in AI systems for bias, so as to generate trust in the technology's day-to-day operations.

Digital secretary Peter Kyle said that while "AI has incredible potential to improve public services, boost productivity and rebuild the economy", to take full advantage, "we need to build trust in these systems which are increasingly part of our day-to-day lives".

"The steps I'm announcing today will help to deliver exactly that, giving businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise," he said.

While DSIT plans to develop new resources for the platform over time, including an AI Essentials toolkit to distil key tenets of relevant governance frameworks and standards so they are comprehensible for industry, the department has already launched an open consultation for a new AI assurance self-assessment tool.

"AI Management Essentials [AIME] will provide a simple, free baseline of organisational good practice, supporting private sector organisations to engage in the development of ethical, robust and responsible AI," said a DSIT report on the future of AI assurance in the UK.

"The self-assessment tool will be accessible for a broad range of organisations, including SMEs. In the medium term, we are looking to embed this in government procurement policy and frameworks to drive the adoption of assurance techniques and standards in the private sector."

It added that insights gathered from the AIME self-assessment tool would also help public sector buyers make better and more informed procurement decisions involving AI, and that the general suite of products on offer through the platform would further help organisations begin engaging with AI assurance and establish the building blocks for a more robust ecosystem.

The development of safe and responsible AI systems is central to the UK government's vision for the technology, which it sees as an area where the country can carve out a competitive advantage for itself.

According to DSIT's AI assurance market report, the department will also seek to support this goal by increasing the supply of third-party AI assurance, which it will do in part by developing a "roadmap to trust third-party AI assurance" with industry, and by enabling the interoperability of assurance through a "terminology tool for responsible AI", which it said would help assurance providers navigate the international governance ecosystem.

In further support of the government's vision, the UK's AI Safety Institute (AISI), launched by former prime minister Rishi Sunak in the run-up to his government's AI Safety Summit in November 2023, will be running the Systemic AI Safety Grants programme, which will make up to £200,000 of funding available to researchers working to make the technology safer.

On the same day as the assurance platform launch, the AISI announced it had signed a partnership agreement with Singapore, which will see both countries' AI safety institutes collaborate to drive forward research and work towards a shared set of policies, standards and guidance.

"We are committed to realising our vision of AI for the Public Good for Singapore, and the world. The signing of this Memorandum of Cooperation with an important partner, the United Kingdom, builds on existing areas of common interest and extends them to new opportunities in AI," said Singapore's minister for digital development and information, Josephine Teo.

"Of particular significance is our joint support of the international network of AI Safety Institutes (AISI). Through strengthening the capabilities of our AISI, we seek to enhance AI safety so that our people and businesses can confidently harness AI and benefit from its widespread adoption."

Ian Hogarth, chair of the UK AISI, added: "An effective approach to AI safety requires global collaboration. That's why we're putting such an emphasis on the international network of AI Safety Institutes, while also strengthening our own research partnerships.

"Our agreement with Singapore is the first step in a long-term ambition for both our countries to work closely together to advance the science of AI safety, support best practices and norms to promote the safe development and responsible use of AI systems."

Read more about AI safety

UK and others sign first binding treaty on AI and human rights: The UK, US and EU have all signed a treaty from the Council of Europe that aims to mitigate the threat AI poses to human rights, democracy and the rule of law, but commentators say it lacks enforcement mechanisms and creates loopholes.

UK AISI to open San Francisco branch: News of the AI Safety Institute's expansion to the US follows the first public release of its AI safety testing results.

Report highlights disagreement among experts on AI safety: An interim AI safety report coming out of the Bletchley Declaration shows AI experts are not in agreement over some of the biggest risks.