In this blog post, Dr Milly Zimeta, the ODI’s Head of Policy, looks at what the Sewell report says about structural biases and digital technology, focusing on how we can all innovate in data policy and data practice for transformative results.

In July 2020 the UK government created the Commission on Race and Ethnic Disparities – chaired by Tony Sewell – to review inequality in the UK, focusing on areas such as poverty, education, employment, health and the criminal justice system. On 31 March 2021 the Commission published its report (‘the Sewell report’), which has been widely – and in our view, credibly – criticised for its methods as well as its conclusions.

We believe that responsibilities and opportunities for addressing race and ethnic disparities in the UK lie across a broad range of sectors and domains in our society and economy; and that there are ways for people and organisations to do this effectively and without being constrained by the limitations of the Sewell report. 

The Sewell report contains 24 recommendations. One of these focuses on data: ‘Recommendation 23: Use data in a responsible and informed way.’ But what does it mean for data to be used in a responsible and informed way? In this mini-series we look at aspects of this question, and the implications for addressing race and ethnic disparities.

As part of this mini-series Dr Jeni Tennison OBE, the ODI’s Vice President, has also written about this topic, focusing on how data is categorised, the government’s focus on its own data, and the digital divide across the UK’s public sector. 

Ghosts and mutants

An area of controversy for the Sewell report is its analysis of structural racism in the UK, which the report defines as ‘a legacy of historic racist or discriminatory processes, policies, attitudes or behaviours that continue to shape organisations and societies today’. The report is sceptical about this, stating: ‘Once we interrogated the data we did find some evidence of biases, but often it was a perception that the wider society could not be trusted. For some groups historic experience of racism still haunts the present and there was a reluctance to acknowledge that the UK had become open and fairer.’

But, as William Faulkner wrote in Requiem for a Nun (1951): ‘The past is never dead. It’s not even past.’ Structural racism has present consequences in new digital technologies derived from large datasets collected over time. In its short section on artificial intelligence (AI), the Sewell report acknowledges that the primary source of bias in automated systems is historic data, because of its role in training algorithms. So perhaps these are haunted algorithms; or perhaps they are just mutant algorithms.

Even without the contribution of ghosts and mutants, past racism has present consequences in how data policies are interpreted or implemented. In my working life in the UK, I’m often the only ethnic minority person in my professional environment. I once reported an incident of racist harassment, which was duly investigated by an independent body, with the perpetrator receiving a warning and anti-discrimination training. Some time later, a conversation about it with the senior manager on whose watch it had happened left me with the impression that he didn’t recognise the significance of the intervention that had taken place. I became concerned that he had not filed the necessary paperwork to ensure the incident was documented for the organisation’s – and sector’s – longer-term record of its potential structural biases, and its accountability and improvement.

When I tried to double-check the organisation’s record, I encountered two striking difficulties around data policy. A response to a Freedom of Information (FOI) request about reported incidents of racist harassment would be aggregated, generalised to a very high level, or even declined in order to avoid the risk of identifying victims – a risk that is unavoidable in situations like mine, where there is only one ethnic minority person. And a subject access request about myself and incidents involving me would be redacted – because data about my harassment is also data about my harasser. I had no way to know whether the organisation’s ‘memory’ of the incident aligned with my own, or even existed. It was almost like a strange mutation, or a haunting.
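To see why such FOI responses end up aggregated or withheld, consider ‘small-cell suppression’, a common statistical disclosure control. The following is a minimal sketch only – the threshold, field names and records are invented for illustration, not drawn from any real policy:

```python
from collections import Counter

# Counts below this threshold are withheld to reduce re-identification risk.
# Real policies vary; 5 is just a commonly cited illustrative value.
SUPPRESSION_THRESHOLD = 5

def aggregate_incidents(incidents, key):
    """Count incident records by a category, suppressing small cells."""
    counts = Counter(record[key] for record in incidents)
    released = {}
    for category, count in counts.items():
        # A count of 1 in a team with a single ethnic minority member
        # would identify the victim, so small cells are withheld entirely.
        if count >= SUPPRESSION_THRESHOLD:
            released[category] = count
        else:
            released[category] = "fewer than 5 (suppressed)"
    return released

# Illustrative records only - not real data.
incidents = [{"department": "Policy", "type": "racist harassment"}]
print(aggregate_incidents(incidents, "department"))
# {'Policy': 'fewer than 5 (suppressed)'} - the requester cannot tell
# whether the record exists at all: exactly the uncertainty described above.
```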

Policy for everyone

In the era of ‘big data’, these are well-established challenges for data policy. We have to balance transparency and privacy. We have to recognise the rights and interests of individuals along with the rights and interests of groups and society as a whole. We have to manage the ways in which traditional governance concepts such as ‘ownership’ and ‘consent’ don’t always apply in a linear way – or at all – in data lifecycles. 

The specific challenges and opportunities around the risks and limitations of data granularity apply beyond ethnic minorities. They apply to other communities that are statistical minorities in the population – such as disabled people – and to ‘minoritised communities’ that are a statistical majority in the population but who have less-than-proportionate power or capital because of structural biases – such as women. 

Rather than avoiding these challenges, we should be thinking about how to navigate them in ways that are applicable for a wider range of present situations and future developments. They are opportunities for innovation.

You don’t have to be a government commissioner, or a chief data officer of a large organisation, for these challenges in data policy to apply to you and your work, or for you to be empowered to do something about it. Policy tools range from ‘hard’ to ‘soft’. ‘Hard’ policy tools such as legislation and taxation are exclusive to government, but these can be difficult and costly for government to develop and implement, and so once set they often remain relatively stable. ‘Soft’ policy tools such as communities of practice, codes of ethics, and toolkits for good practice are available to anyone in any sector, profession, or organisation to develop, implement, and adapt – in ways that can be quick, forward-looking, and innovative.

A good approach to ‘soft’ interventions in data policy is to think about the aims of the policy and the power structures in which it will be enacted, and then to develop a data practice that is equitable and that makes a meaningful, sustainable contribution to your strategic goals.

For example, our Open Cities project and toolkits explore how local authorities can work in a more open and collaborative way with their stakeholders and citizens. They include a case study from North Lanarkshire Council, which was receiving FOI requests that imposed a high time and cost burden on council staff. Believing that ‘every FOI request is a service failure’, North Lanarkshire Council set out to reduce the number of FOI requests by increasing transparency and proactively providing information about the local area. This had the additional benefit of building trust and constructive engagement across sectors, domains, and communities in the area.

Dual citizenship

The opportunity – and need – to innovate in data policy and data practice has two further interesting nonlinear aspects. One of these concerns the availability of data on intersectionality.

Each of us has multiple facets to our identity: ethnicity is one facet; others might be age, gender, socioeconomic background, geography, disability (or its absence), sexual orientation, educational attainment, religion (or its absence), interpersonal networks, and so on. These facets might change over time, as might their relative importance or relative advantage and disadvantage. 

As Susan Sontag wrote in the introduction to her book Illness as Metaphor (1978): ‘Illness is the night side of life, a more onerous citizenship. Everyone who is born holds dual citizenship, in the kingdom of the well and in the kingdom of the sick. Although we all prefer to use the good passport, sooner or later each of us is obliged, at least for a spell, to identify ourselves as citizens of that other place.’ Sontag explores how the misguided, and often harmful, rhetoric of a ‘war against disease’ is really about our inability to confront mortality; in this way she highlights the complexity of our identities and begins to offer us an alternative way to think and talk about illnesses such as cancer – not as something unnatural to us.

This fact of our composite and dynamic identities means that intersectionality has to be considered in all data projects about people, communities, or populations – because intersectionality applies to all of us, all the time. It is something we must learn to navigate rather than be at war with, and to navigate equitably in all aspects of data literacy and data collection. And it means we will need to innovate to develop data policies and data practices that do not shy away from granularity and fluidity in the richness of our lives.
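To make the granularity challenge concrete, here is a rough sketch of how quickly intersectional disaggregation shrinks group sizes. The facet categories and population figure are invented purely for illustration:

```python
# Hypothetical sketch: the combinatorics of intersectional disaggregation.
population = 10_000
facets = {
    "ethnicity": 8,   # number of recorded categories per facet (illustrative)
    "gender": 3,
    "disability": 2,
    "age_band": 6,
}

cells = 1
for name, categories in facets.items():
    cells *= categories
    print(f"after adding {name!r}: {cells} cells, "
          f"~{population / cells:.0f} people per cell on average")

# Four facets already give 288 cells (~35 people each), even assuming a
# uniform spread; real populations are skewed, so many intersectional cells
# will be small enough to collide with the privacy safeguards sketched earlier.
```

This is the practical tension: the more faithfully data reflects our composite identities, the more it runs up against the disclosure risks described above – which is why it demands innovation rather than avoidance.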

Minority report

A second interesting nonlinear aspect of the opportunity and need to innovate in data policy and practice is around the vision for digital technologies that are made possible by data availability. 

The Sewell report, in its short overview of automated systems such as AI, concludes: ‘before dismissing any system, it should be compared with the alternative. An automated system may be imperfect, but a human system may be worse.’ But there is no exploration of how technologies such as AI could be used to tackle the very disparities that the report sets out to understand and address. This is a missed opportunity to consider not just how to prevent the possible harmful impacts of these new technologies, but how to leverage their potential for social good. For example, in autumn 2020 we hosted a public talk by the BFI about its work using computational techniques to genderise 250,000 credited film contributors. This work has allowed it to identify and analyse the career trajectories of women across British filmography since the start of cinema – correcting the historical record, improving the visibility of a minoritised community and the industry’s transparency and accountability, and helping to identify patterns in the conditions for success. These computational techniques could also contribute to improving diversity and inclusion more broadly in the sector.
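This post doesn’t detail the BFI’s methodology, but purely as a hypothetical illustration, forename-based inference of this kind might look like the sketch below. The reference table, and the choice to leave ambiguous names unclassified, are my assumptions:

```python
# Hypothetical sketch of forename-based gender inference, loosely in the
# spirit of the BFI filmography work - not its actual method.
FORENAME_GENDER = {
    "ida": "female",
    "alfred": "male",
    # ... in practice, a large reference dataset of historical forenames ...
}

def infer_gender(credit_name):
    """Infer gender from the forename of a film credit, where the data allows."""
    forename = credit_name.split()[0].lower()
    # Never guess beyond the reference data: initials and unseen forenames
    # stay 'unknown' rather than being misclassified.
    return FORENAME_GENDER.get(forename, "unknown")

for credit in ["Ida Lupino", "Alfred Hitchcock", "J. Smith"]:
    print(credit, "->", infer_gender(credit))
# Ida Lupino -> female
# Alfred Hitchcock -> male
# J. Smith -> unknown
```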

And the opportunity could be stronger, and more exciting, than just redressing past or present harms: digital technologies are, after all, also disruptive technologies. In the field of international development, there has been growing interest in the potential for digital technologies to enable low- and middle-income countries (LMICs) to ‘leapfrog’ traditional stages of linear economic development. There is even potential for LMICs to overtake high-income countries in aspects of their technology adaptation and innovation, because they are not encumbered in the same way by traditional national infrastructures and systems developed before this digital age. 

These sorts of rapid and lateral developments with digital technologies are made possible through data availability for innovation, facilitating collaborations and lowering the barriers to participation for inventors, entrepreneurs, start-ups, SMEs, and civil society organisations. And so when we innovate in our data policies and data practices, to address structural injustices in our society such as race and ethnic disparities, we should do so with this positive transformative potential in mind.

So perhaps, when it comes to data policy and practice, the ‘nonlinear wonderful’ is the most promising response to the ‘unproven weird’.