
By Jeni Tennison and Fiona Smith

A lot of the coverage and discussion about the Cambridge Analytica / Facebook scandal has focused on how people can control what data about them is captured and shared.

It is interesting to look at the scandal another way: imagine a world of total surveillance. If we had no control over what data was collected about us, what expectations and control would we want over how it could be used? How would we monitor and regulate its use?

Data for targeting adverts

Cambridge Analytica did not simply receive data about lots of people: they allegedly used it to help target different political adverts at particular types of people during elections and referenda in Nigeria, Kenya, the US and the UK.

Advertisers on Facebook have several ways of targeting their ads. They can specify categories of people defined by Facebook itself, or use third-party data providers such as Experian and Acxiom to build more precisely targeted lists.

Facebook recently announced that it would stop integrating these third-party services into its platform, although advertisers will still be able to use them within their own systems before sending a custom audience request to Facebook. Advertisers can also gather data about Facebook users directly – through free quizzes or games, for example – and use it to define a custom audience, which they can use themselves or share with other marketers.
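
To make the custom-audience mechanism concrete, here is a minimal sketch in Python of how an advertiser-side system might prepare such a list. The field names and the payload shape are our own illustrative assumptions, not Facebook's actual API; the one load-bearing detail is that custom-audience uploads typically require contact details to be normalised and hashed (commonly with SHA-256) rather than sent in the clear.

```python
import hashlib

def normalise_and_hash(email: str) -> str:
    """Normalise and hash an email address before upload.

    Custom-audience systems typically require identifiers to be
    lower-cased, trimmed and hashed (commonly with SHA-256) so that
    raw contact details are never transmitted.
    """
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical data gathered from a quiz or game (illustrative only).
quiz_respondents = [
    "[email protected]",
    "[email protected]",
]

# The advertiser uploads hashes, not raw emails; the platform matches
# them against its own user base to build the targetable audience.
custom_audience_payload = {
    "name": "quiz-respondents-2018",  # hypothetical audience name
    "hashed_emails": [normalise_and_hash(e) for e in quiz_respondents],
}
```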

In our thought experiment, we assume there is no way to stop others from gathering enough data about you to target ads at you. In that case, what constraints and controls would you want over how they could use that data?

Controlling targeted advertising

Different countries take very different approaches to regulating advertising, particularly political advertising. For example, several countries do not allow political adverts to be broadcast on TV or radio, to prevent elections being swayed towards those with the deepest pockets; in other countries this blanket restriction is felt to limit debate. Organisations that advertise across jurisdictions have to be aware of the different laws that apply in each. These laws are still adapting to the age of the internet and social media, as the Sunlight Foundation has found.

From a data perspective, the narrow targeting of adverts raises particular concerns.

First, regulations and accountability around advertising assume it happens in places that many people can see, such as in magazines, on billboards or on TV. Adverts are not generally vetted by regulators before being shown. Instead, regulators rely on people complaining about adverts that break advertising guidelines. But when only a small proportion of the public see ads – a sub-group most likely to be sympathetic to the ads they see – questionable ads are unlikely to be reported.


Second, targeting adverts introduces bias by restricting access to information. This applies to adverts for goods and services: Facebook is currently being sued by the National Fair Housing Alliance in the US for continuing to let real estate advertisers explicitly target adverts away from women, families with children and people with disabilities. ProPublica has similarly investigated how job ads on Facebook can be targeted to exclude certain demographic groups. Even when the explicit targeting terms appear fair and reasonable, they may produce biased outcomes in effect if those terms are highly correlated with other factors, such as membership of a protected group – as the toy simulation below illustrates.
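
The proxy effect is easy to demonstrate. The following toy simulation – a sketch using invented numbers, not real advertising data – targets only on a 'neutral' interest that happens to be more common in one group, and the delivered audience ends up skewed all the same.

```python
import random

random.seed(0)

# Toy population: group membership correlates with a 'neutral' interest.
# All numbers are invented purely for illustration.
population = []
for _ in range(10_000):
    group_a = random.random() < 0.5
    # The 'neutral' targeting attribute is much more common in group A.
    likes_interest_x = random.random() < (0.7 if group_a else 0.2)
    population.append((group_a, likes_interest_x))

# The advertiser targets only on the interest, never on the group...
targeted = [p for p in population if p[1]]

# ...but the delivered audience is skewed towards group A anyway.
share_group_a = sum(1 for g, _ in targeted if g) / len(targeted)
print(f"Group A share of targeted audience: {share_group_a:.0%}")
# Roughly 78% in this toy setup, versus 50% in the population.
```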

Regulation around the content of political advertising is typically light-touch, based on the idea that politics is a marketplace of ideas. Targeted political adverts mean that people are less exposed to, or even aware of, other points of view. This may reinforce existing views and deepen political divides. A social experiment in Colorado, which looked at what happens when people discuss politics with like-minded people, found that participants tended to become more extreme in their views while divisions with other groups grew. It might be annoying to see ads expressing opinions we don’t like, but good democracy and social cohesion require us to be exposed to different ideas through what Cass Sunstein calls ‘chance encounters’.

Ad platforms can help or hinder regulation

Open data from Facebook and other platforms about which ads are placed – by whom, where, at what cost, and with what explicit and effective targets – would enable consumer rights groups, investigative journalists and regulators to monitor how ads were targeted and how much money was spent.
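
As a sketch of what such open data might contain, here is one possible record structure, written in Python for concreteness. Every field name is our own assumption about what a useful disclosure would include, drawn from the list above (who placed the ad, where, at what cost, with what explicit and effective targets); no platform currently publishes exactly this schema.

```python
# A hypothetical open-data record for a single placed advert.
# Field names are illustrative assumptions, not an existing standard.
ad_disclosure_record = {
    "ad_id": "2018-04-000123",
    "advertiser": "Example Campaign Ltd",  # who placed the ad
    "funder": "Example Campaign Ltd",      # who paid for it
    "platform": "facebook",                # where it ran
    "spend_gbp": 1250.00,                  # at what cost
    "explicit_targeting": {                # criteria the advertiser chose
        "countries": ["GB"],
        "age_range": [25, 44],
        "interests": ["home ownership"],
    },
    "effective_audience": {                # who actually saw it
        "impressions": 48000,
        "demographics": {"women": 0.31, "men": 0.69},
    },
}

# With records like this published openly, a regulator or journalist
# could, for example, total an advertiser's spend over a campaign:
total_spend = sum(
    record["spend_gbp"]
    for record in [ad_disclosure_record]
    if record["advertiser"] == "Example Campaign Ltd"
)
print(total_spend)
```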


This would support reporting to regulators and action against organisations that break local advertising regulations or target ads in discriminatory ways. The data would also support policymaking to update advertising regulation, for example to prohibit the microtargeting of political ads, as the Web Foundation has suggested.

In October 2017, Facebook announced steps to provide more transparency around political advertising. In future, political advertisers will have to verify their identity and say who paid for each advertisement. Facebook says that it will use machine learning to help find political adverts, and will let people see which ads a particular user or group has placed, regardless of whether the person viewing was targeted. A searchable archive of these ads – including how much was spent, how many people were reached, and their demographic profile – should be available for the US by June.

But to inform the current investigations and conversations about the impact of Cambridge Analytica and other organisations on elections and referenda around the world, transparency about political advertising is needed now, globally, retrospectively, and from other ad platforms as well as Facebook.

Decisions about what is acceptable in advertising and political advertising – and about what counts as a political advert – have to be made locally, by the affected communities and countries. As representatives of Myanmar civil society organisations wrote in response to claims that Facebook’s tools were used to spread dangerous misinformation:

“We urge you to be more [intentional] and proactive in engaging local groups, such as ours, who are invested in finding solutions, and – perhaps most importantly – we urge you to be more transparent about your processes, progress and the performance of your interventions, so as to enable us to work more effectively together.”

Facebook and other ad platforms cannot fix the problems they create on their own. They need to collaborate with civil society, regulators and researchers to design solutions together. Publishing open data about all the ads they carry is one mechanism that could enable others to play their part in understanding and acting on abuses.

Data Facebook collects can be used for much more than advertising

We have concentrated here on how the data that Facebook collects can be used for targeting advertising. But it can be used for much more than that; it can be used to assess people’s eligibility for insurance, their suitability for a job, their credit score to support loan applications, and whether they are allowed to enter other countries.

As Ellen Broad has said, in these scenarios the accuracy of the data that Facebook and similar platforms collect and provide becomes much more important. We need additional rights – such as those within the General Data Protection Regulation – to enable us to check and correct data held about us, and to understand when data is used to make decisions about us.

We framed this as a thought experiment because we are not (yet) completely surveilled. However, we cannot have our cake and eat it when it comes to data. We enjoy the benefits that data brings us: personalisation, insights, decision support, more efficient services, and economic growth through innovation.

Quitting Facebook cuts us off from our family and friends, and it is not a practical option for the many people around the world who rely on Facebook as their gateway to the online world. Withdrawing consent for data to be collected about us also limits the benefits that data can bring us, and for those that use data – including in ways that benefit all of us – it introduces blind spots and biases. This is of particular concern in the age of AI and automated decision-making: our voice might not be heard if we don’t appear in the data that drives decisions. Being counted is important, and is still a luxury for many of us.

If we are not to cut ourselves off from the benefits data can bring, we must have controls, good governance and transparency over how data is used as well as what is collected and shared.

You can read all of our thought pieces and interviews on the Facebook and Cambridge Analytica story on our ODI View page. If you have comments or experience that you’d like to share, pitch us a blog or tweet us at @ODIHQ.

If you are getting to grips with ethical data use in your organisation, the ODI offers workshops, coaching and resources designed to help you navigate these issues. Speak with us at [email protected] to find out more.