Nigel Shadbolt

In 2012 Tim Berners-Lee and I founded the Open Data Institute (ODI). One of its main objectives: to develop and foster a trustworthy data ecosystem.

Since then, we have made real progress in using data to help the public and private sectors, citizens and consumers make better-informed decisions. We can plan our travel more efficiently; we have access to a wide range of environmental data, from flooding to air pollution; and we are able to evaluate healthcare provision, and much else.

But the world is now very different from seven years ago. In that time the UK has had three Prime Ministers, a significant referendum on our membership of the EU and two General Elections, with another soon to come. We are increasingly aware of just how much data about us and our friends and families is being collected. And with that comes a growing unease around the concentration of data in the hands of a few online platforms, and the use of that data to target us.

It is a widely held concern. Across the world, people are increasingly wondering how data about them is being used to influence their buying choices and opinions. Just last week, the ODI supported calls by the Mozilla Foundation for an emergency moratorium on all political and issue-based advertising on Google and Facebook platforms until after the General Election. Neither government nor political parties have acted. We are now reliant on groups such as Who Targets Me, and broadcasters including the BBC and Channel 4 News, to track, analyse and expose how political parties are advertising on Facebook.

A survey out today by YouGov (for the ODI) shows that, whilst a large majority of the public, 87%, think it’s important for organisations to use data about them ethically, most are unconvinced that government and technology companies will actually do so. Just 31% trust government, 5% trust social media organisations, and only 3% trust marketing and advertising agencies! The ethical use of data is fundamental to microtargeting: identifying the interests of specific individuals or very small groups of like-minded people so as to influence their thoughts or actions, whether for political or commercial purposes.

Recent focus group research showed that people aren’t always against this practice, for example when it’s used to recommend the next film they should watch. If people know who’s doing the targeting, and can see clear benefits for themselves, they sometimes say they are comfortable with it. With political advertising, people are much less sanguine.

But the reality is that, despite growing unease over the use and misuse of data to target specific groups with particular messages, nothing has substantially changed. Today, I believe, the murky world of political adverts on social media is potentially damaging our democracy. At the very least, the public has a right to expect the same level of content moderation for political ads online as would be expected in print. The large tech companies repeatedly try to avoid responsibility by claiming they are not content providers, but that is patently untrue when the content we see is determined by the algorithms they implement and use!

People have a right to know where political adverts originate, who is paying for them and how they are being targeted. In 2018 the Information Commissioner’s Office (ICO) published Democracy Disrupted?, a review of the use of personal information in voter targeting. It concluded that Facebook had not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign.

The ICO produced a clear set of recommendations, including an urgent call for a statutory code of practice on the use of personal information in political campaigns. But as an election approaches, our regulatory bodies, including the Electoral Commission, lack the powers to deal with the challenges of microtargeting. At the ODI, we have asked twice for these issues to be dealt with - in 2017 and again recently.

The personal information that is used, including information inferred about individual members of the electorate, must be used in line with the law. We are still unsure about the impact of targeting technology, and there is a very real need for research into its effects: some people say its impact is unproven, others that it can have a big effect in marginal areas. While the impact of microtargeting in political advertising remains unproven, there will always be questions about the legitimacy of the outcome of elections.

Which is why an emergency moratorium was suggested. There is precedent: in both the Israeli and Canadian elections, political ads were blocked for the duration of the election period, and in the 2018 Irish referendum on abortion rights, Google blocked political advertising two weeks before polling day.

Blocking political and issue-based advertising is not a long-term solution, and such measures may limit the ability of smaller campaign groups to have their voices heard. But currently, with electoral laws that MPs themselves have described as “unfit for purpose”, there is a danger that public trust and confidence in the broader democratic process will be damaged. Until we update these laws, implement the ICO’s recommendations in full and understand the actual impact of such methods, we should proceed with caution.

-----

Professor Sir Nigel Shadbolt is the Executive Chairman and co-founder (with Sir Tim Berners-Lee) of the Open Data Institute. He is Principal of Jesus College, Oxford, a Professor in the Department of Computer Science at the University of Oxford and the co-author of The Digital Ape: How to live (in peace) with smart machines (2018).

----

This opinion piece was first published in The Telegraph on 12 November 2019: The murky world of political adverts on social media may damage our democracy