The Open Data Institute (ODI)’s Public Policy team is undertaking an ambitious international project called ‘Experimentalism and the Fourth Industrial Revolution’. We are exploring how data policymakers and practitioners can work in more innovative and experimental ways to adapt to, and take advantage of, the fast-moving societal and economic challenges and opportunities around new data availability and associated digital technologies.
The project runs in three parallel workstreams, each named after a science-fiction writer. This workstream is named after Ursula Le Guin and focuses on marginalised communities in North America and Europe as data and digital pioneers.
This is part 1, which focuses on drivers and needs around innovation and experimentation in data policy and practice.
- Ursula Le Guin part 2 – Le Guin and data subversions
- Ursula Le Guin part 3 – Le Guin and data questions
Leading from the margins
Ursula Le Guin’s 1969 novel ‘The Left Hand of Darkness’ describes an alternative society in which everyone’s gender is fluid, and so there are no structural social, political, or economic disadvantages or inequalities around gender.
Practical examples of reconceiving our world to be more equitable can be found in the perspectives of communities currently facing structural disadvantages. For instance, the social model of disability argues that what turns a medical impairment into a disability is the social barriers around it, and that removing or reducing these barriers across society and the economy would create a healthier and more inclusive environment for everyone.
Similarly, queer theory has given us a critical lens on notions of partnership and family, articulating subtle and important aspects of love, trust, and respect that benefit all relationships. Diverse identities in our society offer us diverse critical and creative lenses and perspectives; these approaches might also enrich data policy and practice.
On 27 September 2021, the ODI, in partnership with the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge and the Center for Responsible AI at NYU, convened an international and cross-sector online roundtable. It was a candid and constructive exploration of experimentation in data policy and practice by structurally under-represented communities in North America and Europe, as transnational emergent forces reimagining the social contract in the Fourth Industrial Revolution. We’re sharing some of the insights from the meeting here to open up what we learned and broaden the discussion as we prepare a more substantive report later this autumn.
Roundtable provocations
Provocation 1: Le Guin’s data identities - Dr Mahlet ('Milly') Zimeta, Head of Public Policy, ODI
Some key questions:
- What are the ways in which under-represented or structurally disadvantaged communities have innovated in data policy and practice?
- What are the ways in which under-represented or structurally disadvantaged communities have enriched data policy and practice for others?
- What are the challenges or blockers confronting under-represented or structurally disadvantaged communities in data policy and practice which, if removed, could improve outcomes or experiences for everyone?
Provocation 2: Experimentalism and equity - Dr Jeni Tennison OBE, Vice President and Chief Strategy Adviser, ODI
Some key questions:
- How do we take advantage of the critical outsider perspective to find opportunities to push boundaries in data policy and practice for the better?
- What data policies need to be in place to make the most of the opportunities for transnational cooperation among minoritised communities for better inclusion and data equity?
- How might the conditions for trust change when developing data policy around under-represented and marginalised groups?
Provocation 3: Algorithmic bias bounty - Dr Rumman Chowdhury, Director of Machine Learning Ethics, Twitter
Some key questions:
- What are the distinctions between experimenting on a social group or community vs collaborating with them? How can big tech companies do more of the latter?
- What might be achieved with bottom-up data / AI standards and governance that can’t be achieved with top-down standards and governance?
- What might bottom-up experimentation in data / AI policy or practice look like?
Provocation 4: Faith and AI - Dr Adrian Weller, Programme Director for AI, The Alan Turing Institute; Senior Research Fellow in Machine Learning, University of Cambridge; Programme Lead, Trust and Society, Leverhulme Centre for the Future of Intelligence
Some key questions:
- In what scenarios is there already public acceptance for machines to make decisions that could be matters of life or death?
- How might ideas from religious traditions about human identity or purpose enrich our perspectives on AI?
- How might our AI norms have developed if they had been led by the Dalai Lama?
Provocation 5: AI, Responsibly - Dr Julia Stoyanovich, Associate Professor, Department of Computer Science and Engineering at the Tandon School of Engineering, and the Center for Data Science; co-Director, NYU Center for Responsible AI
- View Dr Julia Stoyanovich's accompanying slides here
Some key questions:
- Should we be trying to model complex or hard-to-measure concepts like success or risk with algorithms?
- Is it ever possible for a dataset to be completely unbiased? Is that always a desirable goal? Can bias in datasets be used to achieve good?
- How scientific can data policy or AI policy be? How scientific should it be?
Provocation 6: Community data - Dr Aaron Franks, Senior Advisor, First Nations Information Governance Centre (Canada)
Some key questions:
- How can social contracts around data practices be reimagined if those data practices have been imposed by force?
- What might be the impacts of “opening data” on communities whose agency has been denied in other domains?
- By whom and how should community data be defined, and how does it differ from other ways of categorising data and data sources?
Get involved
We’ve created a short summary note distilling the high-level themes and observations that emerged in discussion. It’s available here as a ‘living document’, and we welcome and encourage reader comments on it, both as part of a community of practice and to inform how the project develops. You can also read the provocations and summary notes from our Isaac Asimov workstream in this project, and those from our Octavia Butler workstream here.
The summary note also includes a Resource Guide that we hope you find useful, and that you can contribute to. If you would like to explore any of these ideas and opportunities further with any of the event partners, or in collaboration with us, we'd be keen to hear from you. Some immediate practical opportunities might be around ODI Research Fellowships, engaging in the We Are AI course created by the Center for Responsible AI, creating a bottom-up data governance community inspired by Twitter’s Algorithmic Bias Bug Bounty Program, or exploring Faith and AI with the Leverhulme Centre for the Future of Intelligence. We'd also be open to co-developing case studies, projects, or activities. And if there are projects or resources that you'd find useful but that don't seem to exist, do let us know in this document; we or others in this community of practice might be able to develop them.
There’s more about the project here, where you can also sign up to the project mailing list for updates and opportunities. You can contact the team at [email protected], or look out for our news on Twitter: @ODIHQ.