The past few weeks have seen a myriad of events, debates and announcements focused on emerging technologies, especially artificial intelligence. In the UK alone, we’ve had the Oxford Generative AI Summit, followed by the AI Fringe, the AI Safety Summit at Bletchley Park, and, just this week, the ODI’s annual Summit.
The Bletchley Park event saw tech and government leaders come together to sign the Bletchley Declaration in the same week the US unveiled its Executive Order on Safe, Secure and Trustworthy AI. To bring both balance and diversity to the discussion, the AI Fringe ran in parallel to the government event.
As an AI Fringe partner, the ODI contributed speakers and hosted panel discussions.
This week - 7th November 2023 - the ODI staged its 10th annual summit. It was an opportunity for us to elevate many voices that hadn’t been heard in the previous week’s events. With the support of our sponsors, we were able to welcome - for free - anyone who wanted to attend. Among our audience and speakers, we had more than 800 people from over 80 countries around the world, including representatives from civil society, the public and private sectors, and academia - ensuring we left no one behind in the discussion.
Here are some highlights from the past two weeks in data and AI.
Traditional regulatory approaches won’t work with AI
In her opening remarks at the AI Fringe, Harriet Harman MP - chair of the Fawcett Society - raised the challenges facing regulators, saying it took a decade - 2013-2023 - to produce the Online Safety Act, the UK’s attempt to make the internet and social media safer, for children in particular. The AI sector is moving at a dizzying speed: it has taken just 11 months for ChatGPT to surpass 150 million users. This makes it hard, she said, if not impossible, for governments to keep pace, let alone effectively regulate AI’s usage. Technology is fast - legislation is slow.
AI systems should not make decisions; they should help humans make better decisions
Carissa Veliz - Associate Professor at the University of Oxford and author of Privacy Is Power - and Marc Warner - CEO of Faculty, a UK-based AI business - enjoyed a vigorous debate about the AI hype cycle. Still, there was one general point they agreed on: people, not machines, should be the ultimate authority when it comes to decision-making.
We need critical thinking, not hype
Tania Duarte -Co-Founder and CEO of We and AI - offered a sentiment echoed throughout the conference: we need to avoid polarising the issues around AI and deal with them intelligently.
Data skills require lifelong learning
Gina Neff - Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge - said, “We must be thinking about retraining and lifelong learning; otherwise, we risk people being at a future disadvantage in the workforce.”
Let’s not forget that AI is built on data!
“Just as we have recognised that broadband and childcare are part of our infrastructure, we have to see datasets as infrastructure,” said Harriet Harman in her keynote at the AI Fringe. She emphasised the importance of ensuring the data used by AIs is of high quality and free from biases, so that algorithms do not perpetuate unfairness. Moreover, a diverse range of voices - especially the voices of women - need to be included in decision-making about how new technologies are engineered and deployed.
This is for everyone
Jatin Aythora - Director of BBC Research & Development - suggested that AI has to be a multi-stakeholder conversation, and we need a broader variety of voices and frameworks to allow us to use AI in the future.
AI ethics deserves greater investment
Carissa Veliz noted that if we had invested in AI ethics even a small portion of what we invest in AI implementation, things would be significantly better. Making that happen now will need a shift in focus from funders, and enforcement through regulation. She suggested this should be a regulatory requirement, applying to internal corporate investment and traditional funding channels.
Cherish the laws we have, cherish the laws we make
Our legal systems are much maligned, yet regulation and the law maintain equity and order in society. As Dr Veliz suggests, we should both cherish and cultivate the rule of law. It’s the most powerful tool we currently have to keep the lid on AI risks.
Society needs both bread and roses
Isabelle Doran - CEO of the Association of Photographers - pointed out that creatives “are the canaries in the AI coal mine.” And right now, they are working hard to alert the rest of us that something’s wrong. Her call to action? Governments should recognise the importance of the creative sector as we’ll all be financially and spiritually poorer without creatives and their work.
Fly my pretties, fly
As Chloe Smith - Member of Parliament for Norwich North - put it, “We need to get to the same point with AI as we did with air travel”. Professor Shannon Vallor - Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute - expanded on this: “Flying used to be unsafe in the 1960s; now it’s safe. That happened because of safety regulations. Innovation in aviation didn’t stop. Now, if someone says they are going to use AI, we have to say good luck to you.” Francine Bennett - Interim Director of the Ada Lovelace Institute - added that “putting on the brakes allows you to go faster” as an analogy for why we need regulation.
How can we find good data to feed AI?
Sir Nigel Shadbolt - AI pioneer, Principal of Jesus College, Oxford and co-founder of the ODI - outlined why we need to know where to go when we need reliable data for AI. We risk model collapse if AI systems are trained on data self-generated by other AI systems. One potential tactic for resolving this challenge could be an open data-based infrastructure for AI, which could help tackle issues like misinformation in addition to supporting models.
Prepare for the unknown, but not the Terminator
Matt Clifford - the UK Prime Minister’s Representative for the AI Safety Summit - reassured us that the fear of an AI future full of “killer robots” is unfounded. Anne-Marie Imafidon - social entrepreneur and computer scientist - posed a critical question: “How do we build skills for a world that we don’t know? This is not the worry; this is the work”.
And so to…the ODI Summit 2023: Data Changes
“Without data, there would be no AI”, says Sir Nigel Shadbolt, and our ambition for this year’s ODI Summit was to fully reflect this in our content and discussions. We saw and heard from speakers from all over the world - from six of the seven continents. In the next few weeks, we’ll be publishing all our content online. For now, some of our noteworthy highlights include:
Our founders in conversation with MIT Tech Review’s editor Charlotte Jee
In their annual State of the Data Nation interview, Sir Tim Berners-Lee and Sir Nigel Shadbolt considered the question: “Who do AIs work for?”.
Tim made the point that he has, for a long time, advocated for AI that “works for me”. At the moment, the technologies work for big companies, not for people. He drew a comparison between AI and doctors and lawyers, who have historically worked for individuals and are regulated in their activities, calling for a similar approach with AI. “When they get better at working for me, they’ll be much better personal assistants”, he added.
Nigel, paraphrasing the late Professor Patrick Winston, said that AIs are “smart but not smart like us”, agreeing that the notion of AIs that are loyal to people’s interests would be interesting. Nigel brought us back to the central role of data, saying that for this vision to become a reality, we need to know where the data originates and whether humans or algorithms generate it. From a regulatory point of view, Nigel emphasised the need for the right to appeal to human decision-makers about decisions made by machines.
A keynote from Linda Bonyo - founder and CEO of Lawyers Hub and Director of Africa Law Tech
In her talk, Linda Bonyo argued that civil society organisations are too frequently being locked out of conversations about technology, saying that this leads to a lack of accountability.
Linda agreed that data is the fundamental building block for representative and inclusive AI. She warned that “Technology pushes us to like new shiny things [but] we must resist the urge to newness.”
Linda’s call to action was to take a step back and build on the existing frameworks that govern data, including international standards around human rights and other factors.
Talks from Data Changemakers around the world
During the summit, we heard talks from eight Data Changemakers worldwide. Alicia Namkila Mbalire - founder of Win, Uganda - told us how maggot farming has a data ecosystem that helps communities benefit from agronomic innovations.
Joon B from Korea and the USA founded Youth for Privacy. He called out the missing voices of young people in establishment debates around privacy and cyber security. “We all live in a society that runs on data,” he said.
Mayuri Dhumal, a data values advocate from India, reflected on the important - but often missing - voice of women in the data discourse.
A unique, filmed video essay - built with AI - by artist Alan Warburton, commissioned for the summit
Finally, for this roundup at least, the summit saw the premiere of a video essay, commissioned by the ODI’s Data as Culture programme, exploring the wonder and panic conditions of AI-generated imagery. Alan Warburton - an artist, writer, animator, researcher and PhD student - treated us to The Wizard of AI, which was met with a huge round of applause from our audience. One person even asked for it to be played on repeat for the rest of the summit! Watch out for the full video coming soon!
We'll publish and share much more content from the ODI Summit soon. To ensure you don't miss any of it, subscribe to the ODI's Week in Data newsletter for all the latest from the ODI and the wonderful world of data!