Data Skills Framework

The 2019 Data Skills Framework

Data is everywhere and affects every decision we make. We want people to use data to make better decisions and be protected from any harmful impacts. To help understand this complex landscape, we have developed a Data Skills Framework to explain what data literacy can mean for you and your team.

Building from a strong foundation, the framework presents pathways for those stewarding and creating insights from data, as well as for those deciding what happens. These two pathways are tied together by a core focus on leading change, informed by data.

New additions to the 2019 framework include working ethically, governing access and standardising data, all of which reflect the ODI's vision and theory of change.

Find your way through the framework

To build a strong data ecosystem, everyone working with data will need an awareness of most of the top half of the framework. This starts with the data strategy and key policies that control how data is collected, used and shared (the blue Foundations skills). Expanding from the foundations, everyone working with data should have an awareness of their ethical and legal responsibilities when using data, while also playing a role in ensuring that data is properly stewarded to maintain high quality and usability (reflected in the two Management and two Stewardship skills that touch the Foundations skills).

From here, the left-hand side of the framework focuses on building communities, innovation and developing strategy, while the right-hand side is all about discovering insight and making data intelligent. These skills are tied together in the central pathway, which is all about leading change.

How we use the framework at the ODI

All of our educational programmes are informed by the skills framework. When we develop a programme we also mix practitioner-level skills on one side of the framework with awareness-level skills on the other. For example, in a course for data analysts we will ensure that the legal and ethical sides of data analysis are also covered. Likewise, in a course focused on building communities and designing services we will raise awareness of the importance of strong stewardship of data and the role of aspects such as standards and the right platforms.

Expanding the global fight against misinformation with technology

The ODI has joined forces with Full Fact and international fact-checkers to use artificial intelligence to dramatically improve and expand the global fight against misinformation, having won the Google AI Impact Challenge

Information that is false – or ‘misinformation’ – and false information spread deliberately to deceive – or ‘disinformation’ – affect millions of people’s lives; their health, safety and ability to participate in society. In recent years, we have seen people die in acts of violence fuelled by rumours spread via social media, and a new outbreak of measles, among many other things.

Tackling misinformation is complex and requires people like journalists and fact-checkers to be able to respond at the speed and scale of the internet. In the past few years, new technological solutions have been proposed by academics and fact-checking organisations such as Full Fact to help tackle these challenges.

These are promising, but risky. Building technology to automate or speed up responses to misinformation requires a deep understanding of public debate. It also needs to be developed in ways that take care to protect free speech and pay close attention to the responsible limits of artificial intelligence (AI) in this field, and how these can vary across countries, languages, and social and political contexts.

The Open Data Institute is joining forces with reputable fact-checkers and pioneers in automated fact-checking – Full Fact, Chequeado and Africa Check – to advance these efforts. Together, we will work with media outlets, civil society, platforms and public policy makers worldwide to help them understand how AI can help people decide what information to trust, and bring the benefits of automated fact-checking tools to everyone.

The project – which will use AI to dramatically improve and expand the global fight against misinformation – was announced today as one of just 20 international winners of the Google AI Impact Challenge, chosen from more than 2,600 nonprofits, social enterprises and research institutions around the world.

At the ODI, our vision is for a world where data works for everyone. To successfully tackle misinformation, it’s crucial to have access to trustworthy, factual data. If we make data about often misrepresented societal facts – such as crime, immigration or employment statistics – more readily available and more easily usable by automated tools, we can increase the speed, accuracy and scale of fact-checking.

As part of this project, the ODI will build on the excellent work done by the Office for National Statistics in the UK, and their counterparts in other countries, making statistical data available in machine-readable formats and under open licences.

Our goal will be not only to increase the availability and quality of the data being used to train automated fact-checking systems, but also to ensure that this increased interoperability creates further positive impact.
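
As a very simple illustration of why machine-readable statistics matter for fact-checking, the sketch below compares a claimed figure against a published dataset. It is only a conceptual example – the file name, column names and tolerance are hypothetical, and this is not Full Fact's or the ONS's actual tooling.

```python
# Conceptual sketch: checking a claimed figure against a machine-readable
# statistical release. The CSV file name and column names are hypothetical.
import csv

def check_claim(csv_path: str, year: str, claimed_value: float, tolerance: float = 0.05) -> str:
    """Compare a claimed statistic with the published figure for a given year."""
    with open(csv_path, newline="") as f:
        published = {row["year"]: float(row["value"]) for row in csv.DictReader(f)}
    if year not in published:
        return "no published figure for that year"
    actual = published[year]
    relative_error = abs(claimed_value - actual) / actual
    return "consistent" if relative_error <= tolerance else f"inconsistent (published figure: {actual})"

# Example call, assuming a hypothetical 'unemployment.csv' release:
# print(check_claim("unemployment.csv", "2018", 1_400_000))
```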

We will work as openly as possible in this work. If you are involved in similar initiatives, or would like to contribute to the project through workshops or pilots, please get in touch.

How to understand and monitor a city data ecosystem to help make better decisions

How can open-source technologies and open data help foster sustainable mobility, behaviour and city planning to work towards zero emission cities? This was the question posed at the Zero Emission City event, held in Berlin in April 2019. Peter Wells (Director of Public Policy) and Olivier Thereaux (Head of Technology), who took part in the event, share their thoughts on these topics.

Against the backdrop of the complex workings of a city – the multifaceted and interlinked resources, services, businesses, authorities and communities – a new resource has been introduced: data.


Data is being generated, collected and shared at increasing rates. Our lives are becoming more and more dependent on its use in services. One simple example would be the use of data in fast-moving consumer goods (FMCG). Farms, delivery companies and supermarkets use it to put better and cheaper food on our plates. However, many of us still struggle to understand how data is used and how to get access to the data to make better and more timely decisions.

Just as the city is an ecosystem, there is a data ecosystem about and around the city. It is important to build it as openly as possible to create opportunities for better services, better economies and better societies, but also to protect people from the harm that can be caused by misuse of data – whether deliberate or accidental, eg discriminatory profiling or inadvertent data breaches.

Transport, housing, crime prevention, utilities, education – all of this is done at larger scale and density in cities than in rural areas, and the cost of failure is high because of the high number of people affected. There will be multiple public-, private- and third-sector organisations delivering different services in different locations across the city.

Researching data for cities

In our research and development programme here at the ODI we have been working on data practices in local authorities, with a focus on new service delivery. In a follow-up project, we will be focusing on cities and city regions.


In doing this we want to create a different dialogue about data. Rather than talk about ‘smart’ cities we want to talk about ‘open’ cities. We believe that by engaging with decision makers, communities and businesses we can understand where data already exists, where it could be created, and how it could be shared and used. We can also consider how more timely and informed decisions can be made when data is as open as possible.

Data ecosystem mapping

There are a number of ways this can be approached. One of the techniques we think could be useful is data ecosystem mapping. We hope it will help people – whether city policymakers or others – understand what is already happening with data in a city.

This technique draws on ideas from rich picturing, systems thinking and value network analysis to develop an approach for mapping data ecosystems. By creating a visual map that illustrates how data is being accessed, used and shared by a variety of organisations, we have found it is easier to explain the ecosystems that exist around products and services.

By helping us explain data ecosystems, mapping helps people and organisations reach a common understanding about them. And that makes it easier for people to make decisions.
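
As a rough illustration of what a data ecosystem map captures, the sketch below represents actors and data flows as a small directed graph. The actors and flows are invented for illustration and are not taken from a real ODI map.

```python
# A minimal sketch of a data ecosystem map as a directed graph: nodes are actors,
# edges are data flows labelled with how the data is accessed, used or shared.
# The actors and flows below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EcosystemMap:
    flows: list = field(default_factory=list)  # (from_actor, to_actor, description)

    def add_flow(self, source: str, target: str, description: str) -> None:
        self.flows.append((source, target, description))

    def describe(self) -> str:
        return "\n".join(f"{s} -> {t}: {d}" for s, t, d in self.flows)

city_map = EcosystemMap()
city_map.add_flow("Bus operator", "Transport authority", "shares real-time location feeds")
city_map.add_flow("Transport authority", "App developers", "publishes open timetable data")
city_map.add_flow("App developers", "Residents", "provide journey-planning services")
print(city_map.describe())
```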


We’ve previously used data ecosystem mapping in the geospatial sector, looking at bits of the ecosystem like UK flood data and the data used by Niantic’s Pokemon Go, and in the agricultural sector, for example to help people and organisations working to increase agricultural productivity by developing a soil information service.

Mapping the data ecosystem in a city is hard, but potentially very rewarding. Considering the many actors is part of the complexity, but also part of the opportunity to shine a light on who is doing what with data, when, why, and for what purpose.

Data and zero emissions

As outlined above, the event we attended in Berlin focused on sustainable mobility and moving the city to zero emissions.

There are many cities working to achieve zero emissions, for example Amsterdam, Oxford, the City of London, and the newly built Masdar City in Abu Dhabi.

The plan for Masdar City was originally announced in 2009, yet in 2018 it was reported that only 10% of the expected population were living in the city. Clearly it is hard to sell the idea of moving from consumption to conservation in this scenario of a brand new city. And in existing cities, transitioning to zero emissions is a large change and an activity that takes time. After all, the behaviour of many of the people and organisations in a city needs to change.

Long-term activities like this also require city authorities to think of a range of techniques – from hard interventions like regulation, through to softer interventions like using public procurement to buy from green businesses, or building public awareness campaigns.

Cities need to use data to predict whether these techniques are likely to be effective; to apply and use some of the techniques; and to understand if they are actually effective when they meet the real world, real humans and the complex ecosystem that is a city.

Data observatories

Without diving into all the detail, a technique that’s useful here is a data observatory. There are typically many actors interested in a thing – a data observatory can be a useful structure to pool effort to collect data or information about it. For example, gathering data about emissions so that policymakers can design better interventions or assess if their current ones are having any impact.

In our recent work on the peer-to-peer accommodation sector (a largely urban phenomenon), we gathered insights and recommendations about data observatories, which can be broadened to many other sectors:

  • local government officials should create environments that can support the development of data observatories
  • stakeholders with similar needs should develop data observatories collaboratively
  • local and national government officials should engage with different stakeholders to inform decision-making related to the impacts of peer-to-peer accommodation

The data observatory doesn’t need to be a physical thing or a technology platform for sharing data – that’s ‘smart’ city thinking, jumping straight to a tech answer rather than thinking about what’s needed. Instead it might start as an organisation, or a group of people that agree what data will help tackle a problem, who then gather the data and share it as widely as possible. It might grow from there, but it should start small.
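
To make the 'start small' idea a little more concrete, here is a minimal sketch of an observatory pooling emissions readings contributed by several organisations, keeping track of where each figure came from. The organisations, field names and numbers are illustrative only.

```python
# A minimal sketch of the 'start small' data observatory idea: several
# organisations contribute emissions readings, and the observatory pools them
# with provenance so the combined dataset can be shared as widely as possible.
from collections import defaultdict

contributions = [
    {"source": "City council", "district": "North", "year": 2018, "co2_tonnes": 120_000},
    {"source": "Bus operator", "district": "North", "year": 2018, "co2_tonnes": 15_000},
    {"source": "City council", "district": "South", "year": 2018, "co2_tonnes": 98_000},
]

pooled = defaultdict(lambda: {"co2_tonnes": 0, "sources": set()})
for record in contributions:
    key = (record["district"], record["year"])
    pooled[key]["co2_tonnes"] += record["co2_tonnes"]
    pooled[key]["sources"].add(record["source"])

for (district, year), totals in pooled.items():
    print(f"{district} {year}: {totals['co2_tonnes']} tonnes CO2 (from {', '.join(sorted(totals['sources']))})")
```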

You don’t need all the data

One of the things the data ecosystem mapping and observatory might identify is that some of the necessary data isn't available. You may need to persuade people to gather data and help make it available.

Sometimes that data shouldn’t be available – it might be illegal or unethical to collect and share it. Data Protection Impact Assessments, our Data Ethics Canvas, and talking with people to hear their views should help cities understand those issues.

In other cases it could or should be made available. Our data access map might help you understand the range of models available to do that, and our work on risk and re-identification can provide guidance on releasing data through anonymisation.

But in some cases you might think you need the data, but actually you don’t. Perhaps an extra bit of data will only get you from 94% to 95% accuracy. Is that really worth the cost and risk of collecting more data?

It might seem strange to hear a data organisation, one that believes in a world where data works for everyone, say that you don’t need all the data. But it is true.

There is a balance to be found between increasing the openness of data and maintaining trustworthiness. If we don't find that balance we risk moving to what we call the data-hoarding future – where organisations hold on to data because they think it should only create value for them; and/or the data-fearing future – where people and organisations hold on to data because of fear of the harm that it could cause.

Finding that balance is another long-term project, just like reducing emissions in cities. It needs to happen globally, nationally and locally. The biggest challenge for cities that want to build better urban data infrastructure is finding a balance that works for everyone.

Anonymisation and synthetic data: towards trustworthy data

Increasing access to data about people while retaining trust and protecting privacy is one of the most important challenges data practitioners face today. Techniques such as anonymisation and synthetic data may be useful for this, but they remain the playground of a few experts.

In this blogpost Head of Technology Olivier Thereaux showcases the ODI’s recent work to create resources to better manage the risk of re-identification.

Increasing access to data can unlock more value for our societies and economies. This is one of the core principles of the ODI’s mission to create an open, trustworthy data ecosystem. Increased access to data can foster innovation, enable better services and even save lives.

There are, however, many good reasons why data should not be released openly or even shared. This is the case for sensitive data, a category which includes: the kind of personal data deemed ‘special’ by recent regulation; the kind of corporate or state secrets which could create significant harm if revealed; or even information about the whereabouts of members of endangered species.

Sensitive data, private data, personal data: Venn diagram

Creating value from sensitive data

Tools and techniques exist that enable the creation of value from sensitive data while safeguarding privacy and helping ‘data stewards’ (the organisations or people who collect, maintain and share data) be more responsible and maintain trust. As part of our UK government-funded research and development programme, we have been looking at two of those techniques: anonymisation and synthetic data.

Back in November 2018, we were writing about the fact that anonymisation did not seem to be broadly known or understood. While the literature review collated for us by Eticas Research showed us that there is a solid legal and academic understanding of anonymisation, our research on how organisations perceive personal data and the risk of re-identification highlighted that there is a broad range of understanding around the definition of personal data, and how anonymisation can help unlock its value.

Managing the risk of re-identification

Our early research highlighted three specific challenges:

  • Firstly, many data practitioners have a mistaken (but understandable) perception that open data never includes personal information.
  • Secondly, most guidance on anonymisation assumes that the people who use the data, and what they use it for, are well known in advance – something that is inherently hard to know with open data.
  • Finally, the Anonymisation Case Studies document prepared for us by Eticas demonstrates that there are more famous examples of anonymisation gone wrong than examples of where it has been done right.

We set out to explore these challenges, resulting in our report: Anonymisation and open data: An introduction to managing the risk of re-identification. This short document – written for data practitioners who do not necessarily have prior knowledge of anonymisation – provides evidence of personal and anonymised data in contemporary open data (more common than you would think!) and introduces key concepts such as the risk of re-identification, utility of data after anonymisation, and the trade-off between the two. It also introduces in non-technical terms a variety of anonymisation techniques, and closes on a look at how new technologies may further push the boundaries of the discipline.
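
As a flavour of the kind of technique the report introduces, the sketch below generalises quasi-identifiers (age into bands, postcodes to their outward code) and checks k-anonymity: whether every combination of quasi-identifiers is shared by at least k records. It is a simplified illustration with invented records, not guidance from the report itself.

```python
# A minimal sketch of one anonymisation idea: generalise quasi-identifiers and
# check k-anonymity (the smallest group size across quasi-identifier combinations).
# The records are invented for illustration.
from collections import Counter

records = [
    {"age": 34, "postcode": "LS1 4AP", "attended_ae": True},
    {"age": 36, "postcode": "LS1 2TW", "attended_ae": False},
    {"age": 35, "postcode": "LS1 6BN", "attended_ae": True},
    {"age": 52, "postcode": "YO1 7HH", "attended_ae": True},
]

def generalise(record: dict) -> tuple:
    """Replace exact values with coarser ones: a 10-year age band and the outward postcode."""
    band_start = (record["age"] // 10) * 10
    age_band = f"{band_start}-{band_start + 9}"
    district = record["postcode"].split()[0]
    return (age_band, district)

def k_anonymity(rows: list) -> int:
    """The smallest group size across all quasi-identifier combinations."""
    return min(Counter(generalise(r) for r in rows).values())

print(k_anonymity(records))  # 1 here: the YO1 record is unique, so more generalisation is needed
```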

Synthetic data

One of those promising technologies is synthetic data – data that is created by an automated process such that it holds similar statistical patterns to an original dataset. Intuitively, it is easy to see how this could enable the sharing or open release of data that resembles very sensitive data, but with much less risk attached to it. A project organised by ODI Leeds and NHS England looking into synthetic data about A&E admissions gave us the perfect opportunity to contribute and gather practical experience of synthetic data.
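
The core idea can be illustrated with a deliberately simple sketch: learn a few statistical summaries from an original (here invented) dataset and sample new records that preserve those patterns without copying any real row. Real projects, including the ODI Leeds and NHS England work, use far more sophisticated methods.

```python
# A minimal sketch of synthetic data: fit simple summaries to an original (invented)
# dataset and sample new records that preserve those patterns.
import random

original = [
    {"age_group": "18-39", "waiting_minutes": 95},
    {"age_group": "40-64", "waiting_minutes": 130},
    {"age_group": "18-39", "waiting_minutes": 110},
    {"age_group": "65+", "waiting_minutes": 150},
]

# Learn the statistical patterns: category frequencies and the numeric mean/spread.
age_groups = [r["age_group"] for r in original]
waits = [r["waiting_minutes"] for r in original]
mean_wait = sum(waits) / len(waits)
spread = (sum((w - mean_wait) ** 2 for w in waits) / len(waits)) ** 0.5

def synthesise(n: int) -> list:
    return [
        {
            "age_group": random.choice(age_groups),                     # preserves category frequencies
            "waiting_minutes": round(random.gauss(mean_wait, spread)),  # preserves mean and spread
        }
        for _ in range(n)
    ]

print(synthesise(3))
```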

Data about emergency admissions has the potential to create insights that make emergency services better, faster and cheaper, and save lives in the process. But this data is extraordinarily sensitive, as it relates to people at their most vulnerable, and it is therefore only shared in anonymised form, under strict sharing agreements. Without equitable access, the ability to generate life-saving insights decreases dramatically. Creating and publishing synthetic data might help amateurs and professionals alike create models and tools safely, with future potential to be applied to the actual data.

Our contribution to this project was two-fold. First, we explored using a number of ODI practical tools such as Data Ecosystem Mapping to draw a threat model for this synthetic data – creating a variety of scenarios of what could possibly go wrong by looking at all the actors in the ecosystem of data and value around it. We presented our findings in Leeds at a workshop organised by ODI Leeds, and used the experience when creating a prototype companion to the Anonymisation Decision-Making Framework, the comprehensive guide produced by the UK Anonymisation Network. The prototype is still a work in progress – we will be iterating it and will publish it soon.

Second, we created a tutorial on synthetic data. Aimed at developers and the more code-savvy data practitioners, the tutorial walks you through some of the steps followed in the synthetic A&E data project.

Making data processing and sharing more trustworthy

Through this research, we did confirm that anonymisation and synthetic data are among the tools and techniques with potential to make data processing and sharing more trustworthy, by protecting data subjects from re-identification and other harmful incidents. Our work also uncovered a very significant gap in knowledge and understanding between a small group of experts thinking about cutting-edge techniques, and the majority of data practitioners, often confused about best practices around personal data.

We hope the resources created through this project will help create a stronger data ecosystem, where data about people can be used in ways that are safe and trustworthy.

If you wish to build on this work, the register of actors started through this project may be a good starting point – and we could use suggestions of more organisations around the world who can help with anonymisation.

And please get in touch if you have a success story to share about making data more open while managing the risk of re-identification.

Could data be the key to solving England’s inactivity problem?

By Richard Norris, OpenActive Programme Lead

Today Sport England and Mims Davies, Minister for Sport and Civil Society, have called on the sport and physical activity sector to embrace the digital revolution, announcing a commitment of £1.5 million of National Lottery funding to the ODI to help providers innovate using open data. This announcement has highlighted the crucial role of data in tackling growing levels of inactivity.

Sport England’s latest Active Lives research shows that although activity levels are rising, there are still 16.8 million adults in England who aren’t reaching the threshold of 150 minutes of moderate aerobic activity a week required to stay healthy.

People face ongoing barriers including not knowing what opportunities are available – and busy lives can mean that physical activity is less of a priority.

Since its launch in 2016, OpenActive – the open data initiative stewarded by the ODI that is set to receive the funding – has made great strides, including opening up more than 170,000 monthly activity sessions (such as afternoon fitness classes or opportunities to book a sports court or pitch), and bringing together organisations from across the sports and physical activity sector to publish and use open data.

What is OpenActive?

A collaborative approach

A great deal of progress has been made, but there is still more to do. For us to demonstrate to the sector what is truly possible, we are ramping up collaboration within the sport and physical activity community, and supporting major campaigns that use open data.

National campaigns

Public Health England (PHE) is currently using open data in its Change4Life campaign, which is designed to get children more active. The Change4Life activity finder is live on PHE’s website, and OpenActive is supporting its quest to get better activity coverage nationwide, with coordinated support from the leisure management systems Gladstone and Legend.

Sport England’s national campaign This Girl Can is committed to using open data, and will be working with OpenActive in the coming months to highlight more opportunities for women and girls to get active.

Ordnance Survey’s (OS) current campaign #GetOutside aims to help people discover the benefits of outdoor activity. Using open data, OS plans to provide additional routes for people to walk, run and cycle. OpenActive will be supporting OS and other outdoor leisure activity providers, helping them to publish open data and develop standards for publishing data about outdoor activities.

Booking

Alongside national campaigns, OpenActive will make it easier for the sector to safely publish and use open data. We are developing a set of standards for data users, publishers and management systems to use for booking services. This will ensure that different apps can be linked together so that they are easier for consumers to use, and involves working with data users (who pull together different datasets) to run pilots.
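
To illustrate what shared standards make possible, the sketch below shows the kind of structured session record a provider might publish and a check a data user's app might run before offering a booking. The field names are hypothetical and simplified – they are not the actual OpenActive specification.

```python
# An illustrative, deliberately simplified example of the kind of structured
# opportunity record that shared booking standards make possible: if every
# provider describes sessions with common fields, any app can list and book them.
# Field names are hypothetical, not the OpenActive specification.
import json

session = {
    "name": "Afternoon fitness class",
    "activity": "Group exercise",
    "location": {"name": "Community leisure centre", "postcode": "SE10 8EW"},
    "startDate": "2019-06-12T14:00:00Z",
    "price": 5.00,
    "remainingSpaces": 8,
    "bookingUrl": "https://example.org/book/12345",  # placeholder URL
}

def is_bookable(record: dict) -> bool:
    """A data user's app could filter feeds from many providers with checks like this."""
    return record.get("remainingSpaces", 0) > 0 and "bookingUrl" in record

print(json.dumps(session, indent=2))
print("Bookable:", is_bookable(session))
```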

The aim is to work collaboratively with a range of organisations, including activity providers, system providers and data users to explore different business models and get people more active. For example, PHE is keen to add a booking service to its Change4Life activity finder. This will make it easier for people to find and book sessions, while measuring the effectiveness of the campaign by monitoring behaviour changes.

Community supporters

At its core, OpenActive has always been an ambitious community initiative. Funded by Sport England and stewarded by the Open Data Institute, it continues to be a joint effort and we have been working with a variety of community supporters including:

Get involved

If you are part of the sector and not currently involved in our initiative, you can get involved in one or more of the following ways:

  • Activity providers (you deliver physical activities and hold the data on these opportunities)
    • Publish the data you hold and make this a strategic requirement for your booking system provider
    • Engage in the process of defining the development standards with us
    • Act as ambassadors for OpenActive for your sport or physical activity to encourage more people to join in, use and publish open data.
  • Active Partnerships and local networks
    • Make OpenActive a strategic priority and act as ambassadors for open data in your region
    • Support activity providers in your area to open up their data
  • System providers
    • Add OpenActive functionality to your system – this includes enabling open opportunity data and bookability for your customers
    • Encourage and support your customers to enable this functionality – share the possible benefits with them
    • Engage in the process of defining the standards with us
  • Data users (you see the value in using opportunity data opened by activity providers)
    • Create services that help people get active, and help prove the value of open data
    • Tell us which different and new datasets can better support your offer so that we can improve what we do
    • Engage in the process of defining the standards with us.

Get involved

We want to build an initiative that works for the entire sector, so if you would like to share your views, get involved, or improve your data knowledge, we’d love to hear from you! Email [email protected] to contact the ODI team.

Data and diversity: views on approaches and ‘uncomfortable truths’

What are the incentives for measuring diversity? How can diversity data be collected safely and used well?

These questions and more were discussed on the Data and diversity panel at the ODI Summit 2018, chaired by ODI Head of Content Anna Scott.

Speakers included: Amy Turton, Diamond Project Manager at the Creative Diversity Network (TV industry’s diversity monitoring project on behalf of the five main UK broadcasters); Christine Forde, Workforce Equality, Diversity and Inclusion Manager at Greater London Authority (GLA); Georgia Thompson, Civil Engineer and STEM coach; Mark McBride-Wright, Managing Director, Equal Engineers; and Zamila Bunglawala, Deputy Director of Strategy and Insight at the Race Disparity Unit, Cabinet Office.

Uncomfortable truths

Zamila Bunglawala, Deputy Director of Strategy and Insight at the Cabinet Office’s Race Disparity Unit, opened by discussing the government’s Ethnicity facts and figures website, which publishes UK ethnicity statistics as open data. “We in the UK have the best equality data in the world – that is true,” she said, “but where is it published, how accessible is it, and can non-expert users understand it? They were the key questions.”

Another key aspect was to identify “uncomfortable truths,” said Bunglawala. “We do know in our society that ethnic minority groups fare worse in certain industries and sectors – for example in education and employment. What we didn’t know is how widespread those disparities are.”

The project was a huge undertaking: “Doing a stocktake of all government [diversity] data, and then putting it on a website is massive: no government has done it before.” Bunglawala explained that accessibility was crucial. “We tried hard – partnering with the ODI – to make it accessible so that in addition to academics and expert users, the public can actually utilise this data.”

The goal is for people to use that data to take action, explained Bunglawala. “And since we built the website last year, we’ve announced various policies: on school exclusion issues; on mental health issues; and on improving employment for ethnic minority groups,” she said.

“The one I’m most proud of is a consultation on ethnicity pay reporting, building on gender pay reporting,” she said. “And I think this is the way forward for data, especially on inequalities – to actually make it accessible, to make it transparent, and to allow any user to download it, take it away and do something with it.”

Initially, there was resistance to building the website: “Government departments said ‘we already publish this – what’s the issue?’; diversity groups said ‘we don’t need a website, we know what the problem is’,” said Bunglawala. “You have to take people with you on the journey. People will not trust you if they don’t know what you will do with the data. Now we’ve built it, people love it and I’m really proud of that. People can now see what it’s leading to because they can access it for the first time.

“If they see themselves as users and beneficiaries of that data, maybe that will lead to better collection of data. But we don’t talk about data in that way yet. That’s part of the challenge.”

A reflection of society?

Amy Turton (Diamond Project Manager at the Creative Diversity Network) said that, similarly to government data, the broadcasting industry collected diversity data, but it wasn’t being held centrally or consistently collated. “The idea was to put together one single database of who is working in television across the UK.”

And the benefit to broadcasters? “TV content is obviously about good ideas and creativity, and that requires new ideas and new people entering the workforce,” she said. “They wanted to understand where they were missing an opportunity, and where certain protected characteristics were being underrepresented.”

Audience expectation is another important element, said Turton. “One of the questions that we try to answer through the data that we collect is – are people seeing representations of themselves on screen?”

The project has already seen results: “The project has brought the whole industry together to talk about diversity – and the production sector. It’s changing the conversations that broadcasters can have with the production companies.”

Transparency and accessibility are important, said Turton, making the point that previous reports could be lost or overlooked, whereas the new system is accessible, transparent and allows for comparison, monitoring and impact assessment over time.

“It hasn’t been an easy thing to do,” she said, “especially as the broadcasters are competitors. But it is one area where they were keen to work together, recognising that it would be beneficial for all of them.” The fact that the broadcasters already used a shared platform to collate data about the programmes, including details of staff and actors, helped to streamline the process. “It was a case of expanding the system that they already had,” said Turton.

But the tricky part is deciding what to measure. The Creative Diversity Network collects data against six protected characteristics: gender, gender identity, ethnicity, age, sexual orientation and disability. Turton noted the importance of collecting data ethically and effectively, adding: “It was really important that people knew how we gather the data, how it would be stored, and what we would do with it. That trust is really important.”

The changing landscape of protected characteristics can make it hard to communicate diversity data, said Turton. “A lot of protected characteristics are mutable and different year-to-year, which makes people think it’s wrong, rather than question why it might be different.”

An additional issue is that “most people don’t know what to do with data”. Turton noted that people are unsure of how to collect, store and use the data effectively, pointing to a need to improve data literacy.

Measuring and reframing

The – often overlooked – fact that improving diversity benefits everyone is an important point in the diversity discussion. “It’s a shame that you have to trundle out the McKinsey report analysis that shows, on aggregate, that more gender-diverse boards have greater returns,” said Mark McBride-Wright of Equal Engineers, adding that highlighting these universal benefits is important, particularly to help engage privileged groups in diversity discussions.

“Unfortunately for the group that need to be convinced [business leaders], they sometimes need to have a personal activation point to start emboldening and fully supporting diversity and inclusion initiatives,” said Mark McBride-Wright. “And it’s usually because they’ve got a daughter, or someone who’s experienced some inequality: they all of a sudden become supporters.”

The conversation has to be conducted in a way that “doesn’t shut down the white, cisgender, able-bodied men in the profession,” he noted. This awareness and acknowledgement of privilege is also crucial: “They have to accept that they do have privilege, and it’s about creating a way that they can use that position of privilege to the advantage of others. But you have to create active listeners first.”

To address the (very human) ‘what’s in it for me?’ question, McBride-Wright suggested “flipping the gender conversation on its head and focusing on masculinity in engineering”. As an example, he said there is a lot more engagement when focusing on how the gender pay gap also has negative effects on men – in terms of overwork and skewed expectations. “Helping the executives have that personal connection – to override having to quantify it as a return-on-investment – to help them empathise, is one of the most successful points,” said McBride-Wright.

“That’s why I focus on diversity of thought and experience,” he said, “because health and wellbeing affects you irrespective of your gender, ethnicity, sexual orientation etc.” He added that, when trying to bring along a wider audience, “…reframing the conversation proves to be more effective”, rather than focusing solely on the benefits to particular underrepresented groups.

He discussed the natural fit of engineering and data. “Demystifying is the key. In engineering, we love measuring and monitoring,” he said, and also noted the business sense in gathering and acting on diversity data. “Diversity seems to be the only thing that isn’t treated in such a way – like we would safety or quality. What other business process would you not baseline to know exactly where you are now, then bring in interventions, then reevaluate further down the line?” he asked.

McBride-Wright also discussed the importance of having a truly comprehensive diversity policy, noting sexual orientation, disability and other non-surface characteristics as the ‘hidden elements’ of inequality that must be included in any policy design. “Too many [policies] are reactive and just look at gender and race,” he said, and noted that building trust within workforces relies on faith in the robustness of the process.

Inclusion–diversity balance

“I think the focus needs to be more on inclusion: diversity in recruitment is a quick fix,” said Civil Engineer and STEM coach Georgia Thompson. “Inclusion is harder to measure,” she said, adding that it is worth giving it the time and attention as it can have a self-perpetuating effect. “If you have an inclusive culture and environment it will naturally attract diversity, because the people that come in will feel more comfortable.”

Christine Forde of the Greater London Authority agreed: “When places aren’t inclusive, people leave,” she said. She noted the role of training, in particular around unconscious bias, to help people understand how bias might affect their decision-making processes. Diversity specialists need “to work with leadership and managers to help them understand what an inclusive culture is, and how people feel,” she said.

Thompson agreed, noting that senior-level sponsorship is a vital element of a successful diversity and inclusion strategy. Board-level colleagues need to be committed to the BAME [black, Asian and minority ethnic] groups within organisations, explained Thompson. “If the board-level staff are not committed to them, they’re ineffective – they don’t have any power or influence over changing that culture.”

Tokenism is also a pitfall that many organisations stumble into, noted Thompson. “It’s dangerous to rely on a small group of people to contribute to these decision factors – an experience from one person from a particular group doesn’t reflect everybody,” she said, adding that this can happen when diversity is seen as a tick-box exercise.

“What gets measured gets done”

Building a successful diversity and inclusion agenda involves having “real leadership and a real belief that it’s important,” said Forde.

How do you cultivate successful diversity in companies? “That’s where the data comes in. You need to appreciate what your organisation actually looks like,” said Forde. She pointed to campaigns that have been successful: “The gender pay gap data has really made organisations take note,” she said. “It’s that burning platform that you need to propel action.”

But as well as reporting you need to act, she said. “You need to close that gap. It’s about translating the data into achievable actions and monitoring that they are delivered. Governance is crucial. It’s the old adage: what gets measured gets done.”

“As part of the mayor’s ambition to lead by example, the GLA published its gender pay gap data, a year before it was legally required. This year we’ve published our ethnicity pay gap – before it was legally required.”

“As a result of the ambition to make the organisation as diverse as possible, in terms of representation – but also as inclusive as possible in terms of culture – we set up a diversity and inclusion management board,” said Forde.

A success factor is the senior-level representation on that board. It is chaired by the chief officer and “the mayor’s chief of staff and the staff network chairs are on the board. We have parity of influence and can hold people to account on actions,” she added. “We have indicators on our dashboard – and it is absolutely right that they are there as part of core business.”

Nudge and comfort

Discussing the ethics of diversity data collation, Bunglawala stressed the importance of data and digital standards when building the Ethnicity facts and figures website. “We spoke to GDS [Government Digital Service], to the ONS [Office for National Statistics], and the ODI,” she said, noting that “a website about data will only be useful if people trust it – and that has got to be based on what standards it follows.”

User testing in the design phase was also crucial. “Academics, NGOs, policy officials, local and central government, members of the public, expert groups. We tested with all of them, and we said ‘tell us what you want to see on this website’.

“The data has to inform your policies,” she said, adding that people “shouldn’t be afraid of targets – they’re not quotas.” She recalled the reaction when there was a target around women on boards – some people said it was unethical. “We were told: ‘You can’t do that because it’s tokenistic’. Turns out it was a very good move, as we have lots more women on boards.”

Although the civil service is increasingly diverse at the junior levels, “it is not very diverse by ethnicity at the senior levels.” This needs addressing by setting targets, outreach work and supporting people within the organisation. “We also need to encourage boards to offer paid board roles, not just expect them to be volunteers – that could be an inhibitor.”

Small-scale tactics can also work. “We need nudge; we need incremental changes. The aggregate effect of many small nudges – eg ‘How diverse is this meeting?’ posters – can be very powerful.”

Thompson also agreed that targets and visions are crucial: “There’s an assumption that things will just change – that it’s natural that things will change. But they have to be targeted,” she said. “We need actual numbers. You can’t just say ‘we want to improve’. You can’t hit a target you can’t see.”

How we used the ‘styles of government’ tool to explore trade competitiveness

International trade issues are all over the news. Brexit and relations between China and the United States are dominated by how easily companies in one country can access markets in another, and whether they have the competitive advantages to win customers and grow

Governments have a role in boosting the competitive advantage of their country’s exporters, but how do they do it?

At the Open Data Institute (ODI), we are in the middle of a research project on how the quality of a country’s data infrastructure might affect its trade competitiveness in data-enabled goods and services. We recently needed to map the interventions that governments in places such as Australia, Canada, and the United Kingdom are making to boost trade competitiveness, and realised that’s not as easy as one might think.

Styles of government intervention tool

The UK Cabinet Office’s Policy Lab styles of government intervention tool offered a good way to do that. It’s an application of service design to government which shows the ‘patterns’ in public interventions. It is actually more like a toolbox than a tool, in that you use it by working out which category a government intervention fits into, leading to a full view of government roles – like stewarding or regulating, in nascent or mature areas. We’ve been keen to use the tool for a while.

Table showing seven styles of policy making
Image credit: UK government Policy Lab

The result is our UK styles of government intervention in trade competitiveness document, which collects things like the UK Export Finance trade fairs, the Africa Infrastructure Board and the National Trade Academy Programme in one place. Using the tool quickly gave us a broad view that showed us the government’s multiple roles.

Read and comment on 'UK styles of government intervention in trade competitiveness' here

Boosting trade competitiveness

We focused on things the UK government could affect in the short term to boost trade competitiveness, rather than things that it could change through amending international agreements. You might say that we should have mapped interventions by international bodies on top of those by UK bodies, but that might have made the results unwieldy before we had understood them.

We also didn’t investigate sector-level government interventions or consider how public institutions such as the law courts and universities play a role. They affect trade competitiveness – eg maintaining a stable business environment and raising workers’ skill levels – but including them would have likely led to just filling all the boxes, perhaps reducing the tool’s usefulness.

We found it easy to populate the categories, and it was interesting to see that the UK’s trade competitiveness interventions seem to cluster around the UK government as a steward and leader – both at the low end of the intervention scale – with little as a regulator, and with some significant interventions as a legislator. This makes sense in trade, given that a state with a worldwide network of connections such as the UK’s would use them to help domestic producers learn about foreign opportunities. And given that setting trade rules between developed countries is likely to be at least a bilateral process, there seem to be only a few ways in which the UK government acts as a regulator by itself, such as the Trade Remedies Authority’s role in protecting domestic industries against unfair competition.

There are things that the tool doesn’t show – by design. For example, the amount of spending on interventions in one category might be much bigger than in another, but the latter might have a greater number of interventions. Our allocation of some UK government interventions to one category rather than another is also debatable – Economic Partnership Agreements and bilateral investment treaties seemed best captured by the tool’s description of interventions that are about ‘reviewing, identifying, and prioritising key opportunities with strategic value’, but perhaps they’re just another way of connecting networks for ‘scaling, mainstreaming, and market building’.

Data portability, and exploring open data changes in Ukraine

We also played with the tool in a couple of other areas of interest to the ODI: data portability, and the open data and government transparency changes in Ukraine:

  • The former helped us to see more clearly that government interventions for data portability are asking officials to take stewarding and leadership roles while updating legislation on data protection and related areas at the same time. Perhaps this is a new way of spotting when legislators might be more likely to write flexible laws that need to be interpreted, rather than highly prescriptive ones.
  • The latter helped us to map what we expected in Ukraine: that the drive for open data and transparency in Ukraine was new, and that reformers and international development organisations have been making a raft of early-stage interventions.

Next steps

The styles of government tool has given us some more organised insight into our areas of interest, and we’ll be applying it to the trade competitiveness interventions of more countries as our international trade project continues. At the moment it looks like it could be most useful at two points: when we’re trying to get a handle on what a government is doing, which is what we have done so far; and when we have a much deeper understanding and are able to perhaps develop ‘intervention models’ that show policymakers the mix of roles they might adopt to achieve something.

Get involved

You can help us and the wider community that might be interested in the styles of government tool, by commenting on our use of it, suggesting better categorisations or telling us where we have missed things. Policymaking is naturally a question of things like rules, incentives, and coordination, but using design to identify roles looks to us like a handy approach that we would like to get better at, and help others to do the same.

Mapping the UK’s trade competitiveness interventions has helped us in the past few months. As we use the tool more on our project, we’ll share more applications.

Comment on 'UK styles of government intervention in trade competitiveness' here

Data trusts: what’s the economic function?

As part of our research into whether data trusts are a useful way of increasing access to data while retaining trust, we commissioned London Economics to carry out an independent assessment of the economic function of data trusts

The government is pursuing a target for the “UK to be the world’s most innovative economy”. Innovation in the field of AI, which is seen as a key source of future innovation, is part of this. This means more and better AI-related R&D being done in the UK, which implies a lower cost of AI-related research (including easier access to data) and more and better data to be used as the basis for such R&D.

This assessment of the concept of data trusts focuses on their economic function. The rationale for government support for data trusts as a new data-sharing mechanism is framed in terms of their contribution to productivity, and to the development of AI and data-driven innovation more broadly.

Independent assessment of the economic impact of data trusts

See all data trusts research

Greater London Authority and Royal Borough of Greenwich pilot: What happened when we applied a data trust

As part of our research into whether data trusts are a useful way of increasing access to data while retaining trust, we piloted three uses for them in the real world. Here is an overview of what happened when we applied the method to data about Greater London Authority and Royal Borough of Greenwich

This report summarises the work and outputs from the Greater London Authority (GLA) and the Royal Borough of Greenwich (RBG) pilot run by the Open Data Institute to explore whether a data trust model could support sharing of city data. The pilot also supports a commitment in ‘Smarter London Together’ to examine a data trust for AI.

The two use-cases explored were:

Mobility use-case (parking) – This use-case was to trial technology that increases available data on parking in the Borough, in relation to coach parking and spaces that are reserved for electric vehicles and electric vehicle car clubs, with the aim of making less-polluting transport options more attractive.

Energy use-case – This use-case was to improve the energy efficiency of a council-owned social housing block by installing sensors to monitor and control the activity of a retrofitted communal heating system (a water source heat pump).

We used a number of methods to explore a data trust in the context of the Sharing Cities programme. We conducted interviews with key individuals, held a co-creation workshop with the GLA and RBG and a workshop with potential data users, and drew on the work that Involve led with citizens. We also developed a draft data trust canvas to test some initial ideas for a data trust.

Explore our research

This report summarises the work and outputs from the Greater London Authority (GLA) and the Royal Borough of Greenwich (RBG) pilot run by the Open Data Institute. The aim of this pilot was to understand the feasibility of creating a data trust in relation to two of the Sharing Cities programme use-cases. The pilot also supports a commitment in ‘Smarter London Together’ to examine a data trust for AI.

Read report

Involve was tasked with designing a decision-making process for the pilot data trust for the Greater London Authority (GLA) / Royal Borough of Greenwich (RBG).

Read decision-making report

BPE Solicitors LLP explored the legal and governance considerations for implementing a data trust for the Royal Borough of Greenwich and Greater London Authority.

Read legal report

Illegal wildlife trade pilot: What happened when we applied a data trust

As part of our research into whether data trusts are a useful way of increasing access to data while retaining trust, we piloted three uses for them in the real world. Here is an overview of what happened when we applied the method to data about the illegal wildlife trade

During the first three months of 2019, the ODI worked in partnership with WILDLABS Tech Hub to explore whether a data trust – a legal structure that provides independent stewardship of data – could be of benefit to the vast array of data creators, data providers, data users and potential data reusers working to tackle the illegal wildlife trade (IWT) around the world.

WILDLABS Tech Hub suggested two use cases to us as areas where a data trust may have value. Use case one posed the question of whether a data trust could be formed to assist with the sharing of image data in order to train recognition algorithms that help border control officers identify illegal animals and animal products. Use case two asked us to consider whether pictures taken by camera traps and acoustic sensor data could be shared to train algorithms to help create real-time alerts.

This report explores data trusts in relation to the ODI’s and the Office for AI’s work on data trusts.

ODI Report: Exploring the potential for data trusts to help tackle the illegal wildlife trade

See also: