Amongst them were initiatives to leverage data and AI to improve public services, including £3.4 billion to digitally transform the NHS. This includes using AI ‘to cut down and potentially cut in half form filling by doctors’, improving the NHS app, accelerating the federated data platform and moving to electronic patient records in all hospitals. The Chancellor also announced a Centre for Police Productivity to ‘improve police data quality and enable forces to implement promising technologies’, including facial recognition. In addition, there was £34 million for the Public Sector Fraud Authority to deploy AI to tackle fraud. Local authorities have been asked to produce plans by July 2024 setting out how they will ‘utilise data and technology’ to improve public sector productivity, and the Chancellor stated support for Open Referral UK, with a three-year plan to streamline administrative processes for councils and drive further adoption of the Open Referral UK data standard.
The budget also included the announcement of a £7.4 million AI Upskilling Fund pilot for SMEs, as well as ‘two new data pilots to drive high-quality AI in education and improve access to data in adult social care for a total of £3.5 million’. We have previously written about the use of AI in education and await further details from the government on these pilots. In particular, we want to see how it intends to assess and assure whether the data used is ‘fit for function’, and to ensure transparency and accountability regarding the safety of these systems. We also need to know where data - especially that of children - goes.
As a member of the Smart Data Council, we were encouraged to see a commitment to accelerate the benefits of smart data schemes through targeted funding for consultations and calls for evidence in the energy and transport sectors. Smart Data is the secure sharing of customer data with authorised third-party providers - upon the customer’s request - and has immense potential to improve services, lower prices and encourage innovation. The UK has benefited greatly from Open Banking - a smart data scheme - and smart data initiatives in other sectors have the potential to deliver significant financial benefit to consumers, businesses and the UK economy.
In the Autumn Statement, the government announced additional funding to support public compute, taking the government’s planned investment to over £1.5 billion. There was no additional funding in the budget, but there was a commitment to explain how public compute facilities will be managed. It’s hoped that, as a result, "both researchers and innovative companies are able to secure the computing power they need to develop world-class AI products". In addition, the budget announced that the Alan Turing Institute (ATI) has been granted up to £100 million in funding over the next five years to fund fresh advances in data science and AI. We look forward to hearing further details on how access to public compute will be managed and how the ATI’s funding will be used.
Overall, the budget signalled a commitment to artificial intelligence as a tool for enterprise and public sector reform. For example, the Chancellor announced plans to more than double the size of i.AI - the AI incubator team that helps the government to maximise the benefits of AI across the public sector and civil service. But to fully realise the intended benefits, data and AI regulation must be carefully considered. The government has yet to incorporate into its narrative, and its policy announcements, a strong enough recognition of the importance of data in AI. We will continue to advocate for data standards, an increase in transparency around data use, and a move towards a more open data ecosystem that incorporates a wider range of views.
The significant and extensive plans to incorporate data and AI into public services, particularly in health and justice, should sit alongside plans to build the public’s trust in the application and use of these technologies. As we have previously highlighted, the Data Protection and Digital Information Bill - which will regulate AI where it uses personal data - will weaken transparency, rights and protections. It is, therefore, likely to damage public trust in data and AI, with potentially negative repercussions for the acceptance and use of these tools.
According to a KPMG report, while trust in AI among members of the public is increasing slowly, it remains relatively low at 31%. This trust will be particularly vital for the application of AI in policing, where the public is - understandably - likely to be wary of surveillance and facial recognition on a mass scale. We urge the government to consider the data ecosystem holistically and to work with the public and civil society to build trust. This will be essential to make a success of any digital transformation and any use of AI in public services.
To that end, the government should ensure it hears views from all parts of society, all nations and regions, and all demographic groups - especially marginalised and minority groups. This is especially important as the government considers the wider role that LLMs might play in the public sector. Careful consideration must be given to the data on which these systems are trained and to how a layer of human decision-making is retained, especially in rights of appeal. We have previously highlighted concerns about biased data leading to biased outcomes, and there are many examples where the use of AI in public services not only incorporates biases but amplifies and entrenches them. Securing the buy-in and participation of citizens, along with cooperation from civil society groups, will be essential to successfully incorporating AI into fair and equitable public services.
Our data-centric AI work demonstrates the need to recognise the importance of data within AI regulation and use. We have recently called for a greater focus on issues around accountability and liability, as we know that technological harms have a disproportionate impact on marginalised groups. Last month, we wrote about the risk of disinformation that arises from AI. We have also raised concerns about the use of AI in the workplace and in public services, as well as challenges around bias, discrimination and safety.
As the use of AI expands across the public sector, we urge the government to take a more data-centric approach to AI, and to place even more emphasis on transparency, explainability and accountability - so that AI can work in the best interests of everyone and the risk of harm to marginalised and minority groups is minimised. As the Data Protection and Digital Information Bill continues its passage through Parliament, we will continue to advocate for the essential importance of the data ecosystem in successful AI development and application - across all sectors.