From the EU’s GDPR, extending individual rights around data, to the UK government’s commitment to openly publish parts of Ordnance Survey’s MasterMap, emerging laws and policies are changing how we interact with data. Can predictive modelling help us understand what that impact will be?
Data policy is hard
Designing and implementing effective data policies is a significant challenge for policymakers. Data is a relatively novel area of government policy. The data economy – people and companies that produce and deliver data-related products and services – is a rapidly changing and intrinsically complex dimension of our society.
After all, most data is just an imperfect representation of society. Emerging technologies and scandals, such as the recent Facebook/Cambridge Analytica story, can create the need for rapid, reactive policy interventions while legislative changes can become obsolete even before they are enacted. However, there are new policies and legislation – from the EU’s General Data Protection Regulation (GDPR) extending individual rights around data, to the UK government’s commitment to openly publish parts of Ordnance Survey’s MasterMap – that are changing how we interact with data.
At the ODI, we work to enable better decision making around data policy.
Much of our work centres on practical advocacy – proposing advice, experimenting with it, and iterating based on practical experience. While this approach to policy has many benefits, it is not always possible for policymakers to carry out the level of experimentation and iteration necessary. For example, policies that work in one geographical area or sector may not have the same effect in a different context.
Perhaps new modelling methods will help
In our innovation programme, as well as improving practical advocacy tools – such as the Standards Guidebook and Data Ethics Canvas – we are also exploring whether new modelling methods can usefully help policymakers make better decisions around data. This includes the recently launched second UK Tech Innovation Index, a project by the DataCity, supported by the ODI, which used novel techniques to identify clusters of UK businesses.
We are now starting a new project, Predicting cause and effect of data strategies, focusing on whether it is possible to simulate the impact of different data policy interventions on the data economy, using agent-based models.
Agent-based models are computational simulations of the actions and interactions of autonomous decision-making agents. Viewing the overall result of all these interactions can generate insights into the behaviour of the complex systems in which the agents act. Designing models in this way helps to mitigate a flaw of traditional aggregate models, which often fail to fully capture the complexity of systems that involve human agents.
Agent-based models are increasingly used to simulate the effects of policy. The Bank of England, for example, used an agent-based model to help them understand what happens to house price cycles when caps are set on the loan-to-income ratio.
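To make the idea concrete, here is a minimal, illustrative sketch of an agent-based model in Python, loosely inspired by the loan-to-income example above. Agents bid for housing up to a multiple of their income, and a policy lever (a loan-to-income cap) changes how prices evolve. Every number and behavioural rule here is our own assumption for illustration – this is not the Bank of England's model.

```python
import random

def simulate_housing(lti_cap=None, steps=50, n_agents=100, seed=0):
    """Toy agent-based housing-market model.

    Each agent's bid is limited by their income times a desired
    loan-to-income (LTI) ratio; a policy cap, if set, limits that ratio.
    The market price adjusts partway toward the average affordable bid
    each step. Returns the price history.
    """
    rng = random.Random(seed)
    incomes = [rng.uniform(20_000, 80_000) for _ in range(n_agents)]
    price = 200_000.0
    history = [price]
    for _ in range(steps):
        bids = []
        for income in incomes:
            desired_lti = rng.uniform(3.0, 6.0)  # agents differ in risk appetite
            if lti_cap is not None:
                desired_lti = min(desired_lti, lti_cap)  # policy intervention
            bids.append(income * desired_lti)
        avg_bid = sum(bids) / len(bids)
        price += 0.1 * (avg_bid - price)  # price drifts toward affordability
        history.append(price)
    return history

# Compare the same simulated population with and without a policy cap
uncapped = simulate_housing(lti_cap=None)
capped = simulate_housing(lti_cap=3.5)
print(f"final price, no cap: {uncapped[-1]:,.0f}")
print(f"final price, LTI cap 3.5: {capped[-1]:,.0f}")
```

Even in a model this crude, the interesting output is not the price itself but the difference between the two runs – a pattern that carries over to the real question of comparing data policy interventions.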
We need to understand the limits of modelling, and ourselves
In addition to exploring how to design and build these types of models, we are also interested in whether, and when, such simulations can be useful for policymakers. Like any model or emerging method, we know it will not be perfect, infallible or useful in every context. However, through practical research and development, we hope to contribute to the conversations around these techniques in other sectors, and their application to policymaking. We will provide all the outputs of the project under an open licence, so that others can access, use and build upon our work.
Coming into this project, we recognise that we are not experts in agent-based modelling, so over the past couple of months we have spent time trying to understand what we could, and should, be doing during this project. This started with some initial research to improve our general understanding, conversations with experts to help direct our approach, and internal debate about what policy questions we would like to test.
Having gone through this discovery process, we are now looking to take practical steps towards testing the usefulness of these types of models for data policymaking.
We have launched an open tender for a partner organisation, or organisations, to help us create a specification and subsequently build a working prototype. Alongside this, we are keen to engage with and get input from other organisations interested in working in this field.
If you have any questions or feedback – please email Philip Horgan.
If you have comments or experience that you’d like to share, pitch us a blog or tweet us at @ODIHQ.