Earlier this week, the Government published its highly anticipated response to the AI White Paper consultation (you can see the ODI's submission here). The Government received over 400 responses from civil society, industry, regulators and others.
The Government's response included the announcement of £10m of additional funding to equip regulators to tackle the sector-specific risks and opportunities of AI. A further £80m was announced to launch nine AI research hubs across the UK and to support work on how the Government could mitigate the risks posed by general-purpose AI models.
The Government's response confirms the White Paper's intention to pursue a principles-based approach, in which sector-specific regulators regulate AI in context, keeping regulation agile.
The five principles are:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The ODI's consultation response included calls for a sixth principle centred on data. Our data-centric AI work demonstrates the need to recognise the importance of data in AI regulation. We hope that the government's approach will include a stronger recognition of the role of data in AI as the technology, and the necessary legislation, continue to progress. We are disappointed that in the Data Protection and Digital Information Bill - which will regulate AI where it makes use of personal data - the government is weakening transparency, rights, and protections, and failing to recognise that public trust in the use of data and AI must be maintained if it is to unlock innovative new research, services and uses.
The government's response set out a requirement for the UK's key regulators to publish their strategic approach to AI regulation by the end of April this year, alongside an assessment of the challenges and risks in their individual sectors - and how they plan to mitigate and address them. The response acknowledges that there may be regulatory gaps, which the government is reviewing - it should complete this review, and outline concrete next steps for filling those gaps, as a matter of urgency.
We were pleased to see the government acknowledge the complex and important questions around liability and accountability when it comes to using AI - something the ODI has been calling for as we know that technological harms have a disproportionate impact on marginalised groups. The government's response also set out its intention to explore an array of binding measures and regulation for frontier AI, to sit alongside existing voluntary actions, including transparency, testing, and responsibilities for developers.
Earlier this week, we wrote about the risk of disinformation arising from AI, so it was positive to see the government's response include deepfakes and disinformation among the short-term risks of AI. In addition, it highlighted concerns around the use of AI in the workplace and in public services, challenges around bias and discrimination, and safety concerns. The government intends to tackle these risks by increasing its capabilities to track AI-related crimes and risks, by using technology and international dialogue to mitigate risks, and by calling for further evidence on AI-related risks in the information space. The government also said it would make use of the Algorithmic Transparency Recording Standard - which details algorithms being used in government - mandatory for departments (and, in time, for the wider public sector). We await further information on how this will be made mandatory and enforced - the government appears to have abandoned earlier plans to put this into law - and when it would be rolled out more widely.
While we are pleased to see some extra funding made available for AI regulation, we are concerned that these funds are significantly less than required to upskill existing regulators to effectively manage and mitigate risks from AI. We would urge the government to consider a more data-centric approach to AI, and to place even more emphasis on transparency, explainability, and accountability - so that AI can work in the best interest of all, and so that the risk of harm to marginalised and minority groups is minimised. As the Data Protection and Digital Information Bill continues to work its way through Parliament, we will continue to advocate for the government to recognise the impact that weakening the data ecosystem will have on AI development - and would urge it to keep the requirements for Data Protection Impact Assessments and Data Protection Officers.
We look forward to supporting the Government and all policymakers in the coming months and years to strengthen the UK, and international, data ecosystem and to promote and advance trust in data and AI.