From contact tracing to the publication of the government’s National Data Strategy, 2020 has seen us talking about data and algorithms like never before – and England’s annual A-level results day was no exception.

In August, students in England prepared to receive their A-level results, only to find that an algorithm had reduced 40% of the teacher-assigned grades, with reports suggesting that this disproportionately affected those from disadvantaged backgrounds.

The algorithm was designed to standardise the grades submitted by schools in place of students taking exams. However, the decision was met with outrage – from protests by students to backlash in the media – until the teacher-assigned grades were reinstated less than a week later. The algorithm itself had many facets – our Vice President Jeni Tennison breaks it down here – and critiques ranged from objections to the data fed into the algorithm through to the argument that an algorithm should not have been used in the first place.

In this post, we won’t focus on what went wrong – for more on that, you can read our blog post detailing the opinions of experts in data ethics and responsible AI. Here, we instead want to explore what the fallout could mean for the future. Whatever you might think about where the pitfalls of the situation (dubbed the ‘exams fiasco’) lie, one thing has become clear: we have reached a point at which it’s impossible to ignore the presence of algorithms and data in our everyday lives.

Even the word ‘algorithm’ is now part of our everyday vocabulary – whether we’re discussing AI-authored articles, critiquing how content is delivered to us on social media, or campaigning against algorithmically decided exam grades with signs that read ‘Fuck the Algorithm’, ‘Algorithm? Elitism’ and ‘Your algorithm doesn’t know me’. Even Boris Johnson talked about a ‘mutant algorithm’ derailing exam results in a speech to school pupils.

The idea of people campaigning against an algorithm that helps determine someone’s future may once have seemed like a far-off dystopia. In fact, one A-level student wrote a story about exactly that in 2019, before she too saw her teacher-assigned English mark standardised downwards by the Ofqual algorithm. But this is not a far-off future – this is now.

In an ODI-commissioned artwork, standup poet Mr Gee puts it best...

For this is the time for decision – the future’s already arrived.

So, the future is here – the role algorithms (and therefore data) play in our lives is growing, and that doesn’t look set to change anytime soon.

To some, this may seem concerning – after the exams fiasco, a survey revealed that much of the public has little trust in the use of algorithms. And contention around algorithms is nothing new – consider Twitter’s recent decision to move away from its image-cropping algorithm after it appeared to show racial bias, or the case of the car insurance algorithm that would charge someone named ‘Mohammed’ more than someone named ‘John’.

Data-informed decision making may have its limits, and algorithms can come with other negative effects (for example, one poll suggests the exams fiasco took a toll on young people’s mental health, and powering AI can have a huge carbon footprint – both topics we’ll touch on at this year’s summit). However, data-informed decision making isn’t alone in this – the same can be said for almost all decision-making processes, human or machine. And, like other decision-making processes, algorithms and data can create real, positive change too – reducing staff turnover rates, ensuring quality control or creating personalised user experiences.

Machines aren’t infallible – errors can and will be made (after all, humans design algorithms and the systems in which they’re used, and as the saying goes: garbage in, garbage out). Data can be a powerhouse for transformation, but to maximise success and minimise risk, we need to be able to prevent and detect potential errors, correct them when they’re found, and limit the effects on those left waiting for errors to be fixed.

To do this, we need to possess the knowledge and understanding of how to use and apply data correctly – we need to be data literate. By this, we mean much more than learning how to clean or analyse data. It means developing skills to enable people to control and understand how data might be used. It means policymakers and business leaders understanding how to use data as a tool to achieve their goals.

Data literacy can be the make-or-break factor. If we aren’t data literate, at best, opportunities for innovation slip through the net; at worst, serious mistakes are made…

This doesn’t just apply to data scientists or key decision-makers. As we’ve laid out, data is driving decisions in many aspects of our lives. This means data skills are required across a workforce. For more on the types of data skills that are required across a business for it to be data literate, see our Data Skills Framework.

To hear more about the importance of data literacy and how to integrate it across a workforce, join us at the ODI Summit 2020 on 10 November, where we will be focusing on the topic of Data Futures. We will explore what the future of data means for many topics, including innovation, policymaking, health and climate. This will include a look at how we develop the right skills for a data-literate future.