Guests on stage speaking at the AI Fringe in June 2024

The AI Fringe returned to mark the AI Seoul Summit with a half-day live event at the Knowledge Centre at the British Library in London.

Our Global Head of Policy, Resham Kotecha, spoke about how we can strengthen the data and AI ecosystem, how we can tackle the potential harms from AI while leveraging the benefits, and what we would like to see from next year’s AI Summit in France. Below are some of her key points.

Different parts of the ecosystem bring different perspectives, strengths, and levels of expertise. The AI Summit at Bletchley didn’t do enough to incorporate and include a wide range of voices - the onus is still on external partners to convene as a group and to ensure that civil society views are included. The AI Fringe is a fantastic example of how powerful discussion and debate can be - it draws on the UK’s huge strengths in civil society, academia and research, as well as business.

Any future government needs to work harder to recognise these strengths, pull people together and leverage every part of the ecosystem - for the good of society and the economy.

Watch this clip: https://vimeo.com/957835744

The UK is one of the leading countries in the world in AI development, and it has an exciting chance to act as a role model for regulation and legislation - a responsibility and opportunity that we should not take lightly. While we supported elements of the now-fallen Data Protection and Digital Information Bill (DPDI Bill) (you can read more about our thoughts here), we opposed elements that would have significantly weakened data protections, for example removing the requirement for Data Protection Officers and Data Protection Impact Assessments. It is a popular fallacy that we can remove data protection while simultaneously strengthening the AI ecosystem - they are the same ecosystem. If we weaken data protections, we weaken the AI that is built on top.

Watch this clip: https://vimeo.com/957843398?share=copy

In addition, if we exclude civil society, we weaken the ecosystem by disregarding perspectives and insights into AI harms that are already being felt, particularly by marginalised communities. The DPDI Bill was proudly presented as designed “by industry, for industry” - but where was the recognition of the impact on society? Where was the consideration for the people who might suffer harm?

Watch this clip: https://vimeo.com/958245887?share=copy

When we develop legislation in the UK, we should recognise that lots of countries look to our legislation to piggyback and leapfrog; after all, it is much easier to tweak and adapt existing legislation rather than having to start from scratch. By strengthening our data and AI ecosystem, we can shape the global data and AI ecosystem.

Watch this clip: https://vimeo.com/958327578?share=copy

The pressure on resources for civil society is significant, and government funding has been cut for many years. Last year, the ODI averaged a consultation response every three weeks for ten months, in addition to briefing Parliamentarians on the DPDI Bill and the AI White Paper. Like many other civil society organisations, when we want to advocate to parliamentarians, we are competing with Big Tech firms with much bigger teams - we face opportunity costs and difficult choices about where to focus our limited resources.

If we want a stronger data and AI ecosystem, and if we want to represent those most at risk of harm from data biases and poorly applied AI, we have to have strong, trusted and well-funded independent organisations.

Watch this clip: https://vimeo.com/957875701?share=copy

Earlier this year, we launched our Data and AI Policy manifesto. We talked about the key principles that we believe are foundational for building a thriving data and AI ecosystem that benefits people, the environment, and the economy.

One of these key principles is centred around the need for trusted and independent organisations.

The USA has created a $200m fund to support AI for good - the UK should follow suit, so that it isn’t just well-funded organisations or Big Tech that can play a role in advocating for legislative and regulatory change. We should empower civil society organisations by funding them adequately.

Watch this clip: https://vimeo.com/958248259?share=copy

We are asking people to trust a system that even developers don’t fully understand - but there are a number of steps we can take to increase transparency, accountability, and trustworthiness in data and AI.

The more open data is, the more interoperable datasets are, and the more government leads by example in publishing datasets and explaining how it uses AI, the better people will be able to understand the use and impact of data and AI on their lives and to seek accountability and redress - and the better able we will be, as a country, to leverage the benefits of AI effectively.

We talk a lot about upskilling the general public and business leaders on digital literacy, which is vital. But it is Parliamentarians who have to legislate on AI - upskilling them will give the public more confidence in the resulting legislation.

Watch this clip: https://vimeo.com/958255011?share=copy

At the ODI, we talk a lot about empowering communities and about participatory data programmes. Our work focuses on how we can empower and engage communities, particularly those likely to be impacted by poor quality datasets which might amplify bias. The more open we are, the more we include academics and researchers to rigorously assess impacts, the more likely we are to safeguard those at risk from AI harms.

Watch this clip: https://vimeo.com/958261820?share=copy

We have also launched a Data-Centric AI Project to bring about an AI ecosystem grounded in responsible data practices - a reliable data foundation on which AI can be built.

Watch this clip: https://vimeo.com/958316811?share=copy

One of the key challenges we face in developing this trustworthy data and AI ecosystem is that legislation cannot keep up with the pace of technological advancement. Developing legislation has always been like driving while looking in the rearview mirror - but now, technological developments come so fast that it is almost impossible to imagine how legislation could be future-proofed.

We have been holding discussions on how to make the legislative process fit for purpose. This led to a passionate debate at the launch of our Policy Manifesto, where cross-party Parliamentarians discussed potential solutions such as amendable secondary legislation, joint committees, and the potential “beefing up” of existing select committees. We are still working towards an answer - but there was cross-party agreement that we need to find a way to make technology legislation fit for the 21st century!

Watch this clip: https://vimeo.com/958317582?share=copy

In addition to adapting and evolving the legislative process, we need to recognise the gaps in existing laws and regulation. As panellist Gina Neff highlighted, AI is not even defined in UK law. There are many other gaps in current UK law that mean we are unable to fully manage the potential risks that AI presents. As an example, we recognise the potential of AI in education. AI could act like a personal tutor for learners - at the ODI, our learning team have been piloting an AI tutor - but there are significant challenges when it comes to the use of AI in education. Who will decide what is “right” on subjective subjects? Who gets to decide what people view as truth? We need to pay heed to who will determine the narrative going forward.

Watch this clip: https://vimeo.com/958322587?share=copy

AI also has exciting potential applications in health - we know, for example, that AI can help diagnose cancer. However, we have also seen headlines about the Princess of Wales’ health data being leaked, which made people nervous about their own health data. And we have already seen, through NHS Digital’s 2021 launch of a new service - the General Practice Data for Planning and Research (GPDPR) - that a lack of transparency and explainability can quickly lead to public distrust. In the case of the GPDPR, the NHS’s failure to engage properly led to 1.5m people opting out of their health data being used - and the NHS had to pause the service. We need to consider what safeguards would help people feel that their personal and sensitive data is secure, so that it can be used for purposes like spotting and curing cancer.

Watch this clip: https://vimeo.com/958320161?share=copy

As we look forward to next year’s French summit, we hope to see the true empowerment of communities and civil society. We would also urge the French government to place data on the agenda - front and centre. The outcome of the summit could be hugely powerful if it established international consensus on how to ensure datasets are high quality, standardise assurance processes, and make AI explainable and accountable, with accessible redress.

If we can get the data foundation right, the positive applications of data and AI are both exciting and endless.

Watch this clip: https://vimeo.com/958324391?share=copy

You can watch the full panel, as well as the AI Fringe sessions related to the Bletchley Summit.