This week - 30th October to 3rd November 2023 - has been a significant one for data and AI. At the ODI, we’ve been keen participants in the conversation and partners in the AI Fringe event. Over the week, we have seen:
- The publication of the US Executive Order on Safe, Secure and Trustworthy AI, with America promising to lead the way on “seizing the promise and managing the risks of artificial intelligence”, followed by an address at the US Embassy in London by Vice-President Kamala Harris.
- An open letter from a range of UK organisations and individuals - including the ODI - calling on government and tech leaders to recognise the clear and present risks to ordinary people today rather than focusing solely on ‘frontier AI’.
- The unveiling of the Bletchley Declaration, issued on the first day of the AI Safety Summit, claiming to be “a world-first agreement…establishing a shared understanding of the opportunities and risks posed by frontier AI.”
It’s been such a significant five days that Politico has dubbed it “the week when AI and geopolitics collided”. Yet, in the wider world, Tim Bond from IPSOS, reflecting on their latest data, reminded us not to “forget that 'NORMAL' people haven't necessarily even used some of these AI technologies yet.” In that research, IPSOS found that 64% of the UK public haven’t used generative AI tools at work, and only 3% use them ‘very often’ in this context.
Low levels of adoption might reflect caution, lack of awareness, concerns about safety - and more. Our founder and Executive Chair, Sir Nigel Shadbolt, writing this week in Management Today, called this a “learning moment”. He said that whilst “many people will not be writing extended amounts of code, …people should be able to understand, in outline, the technologies they work with and that feature in their lives”.
On balance, there has been limited consideration of the role of data throughout the week. Yet just as there is a spectrum of AI, there is a spectrum of data, and at the ODI, we are interested in where the two coalesce. Data isn’t mentioned in the Bletchley Declaration, but it features strongly in the Executive Order, which focuses on rights and privacy.
Next week, on 7th November, we’ll stage our own ODI Summit 2023, with the provocation that “without data, there would be no AI”. We’ll explore how technology can work with data across the spectrum, from closed to shared to open data, and we’ll broaden the discussion to focus on the impacts - and benefits - of data-enabled technology across many parts of the global community.
So, what key takeaways will we carry from this week’s events into next? Here are four of the most significant ones.
What about the data?
For the ODI, the limited acknowledgement - in the Bletchley Declaration - of data and data infrastructure is a concern. Datasets, which, as Nigel Shadbolt has said to the government, are the ‘feedstock’ of AI systems, exist within ecosystems where every element of governance demands attention. This is more important now than ever:
- LLMs may need more high-quality training data, as this MIT Technology Review piece suggests, to avoid model collapse. Nigel Shadbolt also discussed this during our Tuesday session at the AI Fringe.
- More broadly, an open and trustworthy data ecosystem is essential as we emerge into a new era of technologies - a matter which Mozilla has suggested should be a global priority.
Valuable international collaboration
It’s positive that the Bletchley Declaration, signed by 28 countries, signals “shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration”. In addition:
- The emphasis on investing in governance mechanisms is a win, and it came through strongly in the topics discussed at the AI Fringe.
- We would have liked to see more alignment on data and were disappointed - again - not to see this recognised in the Bletchley agreement. As we called for in our response to the government’s AI White Paper, data should be addressed as an integral element of AI.
- Overall, the statement itself is ‘action-light’, despite an earlier announcement establishing a joint UK AI Safety Institute.
Consider clear and present risks
Many commented, including Kamala Harris in her speech at the US Embassy, that the Summit’s emphasis - and that of the policy developments surrounding it - on ‘frontier AI’ must continue to be complemented by urgent and extensive work on the present and short-term risks and harms arising from current and future applications of AI. These include, but are not limited to, the use of algorithms - often without human oversight or a right to redress - to make decisions about people’s eligibility for jobs, financial support or educational attainment; surveillance and predictive justice and policing; and risks to the livelihoods of creatives.
The systems behind these algorithms are trained on vast amounts of data. As we have said in our Five Year strategy, for data to work for everyone, those collecting and using it must be highly alert to inequalities, biases and power asymmetries. The US Executive Order explicitly states a commitment to advance “equity and human rights, stand(s) up for consumers and workers, promote(s) innovation and competition.” We would like to see similar commitments and action in the UK and around the world.
Bring in a broad range of voices
There has been a lot of talk - in the Bletchley agreement, in Kamala Harris’ speech, and by different commentators across civil society and on social media - about ‘who should be in the room’ for the discussion about advancing technology. Kamala Harris’ speech emphasised the importance of civil society for AI accountability, and she announced a new coalition of ten philanthropic organisations to support public interest efforts that mitigate AI’s harms and promote responsible use and innovation. Nigel Shadbolt spoke on this subject during the ODI’s fringe session, saying: “Technology that affects many people requires involvement from many people.” Harriet Harman also referenced the importance of gender equality in her keynote at the AI Fringe, stating: “We must have no men-only rooms in planning for the development of the AI workforce or indeed anything else to do with the development of and regulation of AI.”
In the weeks and months ahead, we’d like to see a diversity of voices joining the conversation - and the decision-making - about the role that technology can play in society and how data should be accessed, used and shared.
The Autumn of AI
Of course, it remains to be seen whether this week - as a moment in time - is remembered as a landmark in international relations around the development of AI, or whether it is quickly overtaken by the pace of change in world events, digital or otherwise. In the meantime, we continue our mission to create an open, trustworthy data ecosystem, confident that it is the essential bedrock on which all new technological advancements should be built.