Alistair Gentry is the ODI’s Embedded Artist in Residence. The role, which attracted artists from across the globe following an open call, will focus on themes the ODI is exploring in its R&D programme.
We’ve invited Alistair to consider subversive, creative, alternative, humorous or tactical methods for making data accessible – in a way that protects citizens’ privacy where necessary – and to home in on the social and cultural questions that arise from attempts to do this while maintaining trust. He will work alongside ODI researchers on projects which aim to advance knowledge about data and how it is shaping the next generation of public and private services.
Alistair will be delivering an ODI Fridays lunchtime lecture on 2 November.
In theory, trust is pretty simple. I do what I should and I try not to let you down, and vice versa. We have a vocabulary for trust and distrust on a relatively day-to-day, domestic scale: he’s shifty; that company looks dodgy.
Obviously, the reality tends to be much more complex and asymmetrical than simply trust, its absence, and our instincts. We all know that our trust can be betrayed, or can look foolish with the benefit of hindsight. The risks, costs and side-effects of misplaced or mishandled trust certainly fall much more heavily upon some people and some sectors of society than others.
These are some of the things that are on my mind as I begin my role as research artist in residence at the Open Data Institute. As an artist, I can be more conceptual and less pragmatic than some of my colleagues are – but the underlying issues are exactly the same.
The ‘tech bro’ mentality that seems prevalent in data handlers like Facebook is inextricably bound up with the impact of their technologies on nearly all of us – either by ideological (usually Silicon Valley-inflected libertarian) design or by privileged, unconscious ignorance.
“I don’t think I have anything to hide, and if you have nothing to hide you also have nothing to fear,” is a sentiment certain people are privileged enough to utter in all seriousness.
They might feel differently if they were an investigative journalist in Malta or a Russian political activist. Or, like me, a gay man who could be murdered in certain countries if the authorities, quasi-political thugs in collusion with them, or randoms on social media knew my sexual orientation and location simultaneously.
Three EU member countries (Poland, Latvia and Lithuania) are among the worst places to be an LGBT person on the entire European continent, in terms of human rights violations and discrimination, according to Statista.
The gay men who were abducted, tortured and murdered in Chechnya in 2017 were mainly identified and ensnared via their social networks and phones.
Over a hundred people fled Chechnya as a result of this data-driven purge, most preferring to leave the Russian Federation entirely, an Amnesty report found.
In short, even if I never plan to set foot in Chechnya, I can’t afford to be as blasé as some people are about who has hold of data related to me, and what they do with it.
In a developed country in 2018 it’s trite to mention our lives being extensively monitored and recorded thanks to our constant use – and the mere passive presence on or near our person, in some cases – of phones and smart devices, alongside online banking, government services and shopping.
Data from any or all of these uses and technologies is actively interlinked. Most people are conscious of it to a degree, but few are conscious of its full extent, reach and implications. If they were made aware, they’d be amazed, or aghast.
Invisibles and untouchables
Another aspect of privacy and data ownership that rarely gets much attention is the prospect of new invisibles or untouchables. In a world where everything is stored and enacted online – including huge chunks of your identity and your ability to live, work, express yourself, or exercise your rights – what becomes of people who, through choice or circumstance (poverty, for example), have no smartphone, no contactless cards, no credit history, no regular or reliable internet access?
This applies to many people in the UK, let alone developing or authoritarian societies. China and many Arabic-speaking countries maintain pervasive blocking of whole swathes of the open internet and of certain subjects in particular. Iran and Myanmar, among others, have at times effectively pulled the plug on internet access in the face of public protests or political upheaval.
Understandably, ordinary citizens in these countries tend not to trust their governments or the companies that facilitate them. There’s a lot to be gained from keeping one’s digital footprint minimal in many parts of the world, but this also keeps people – and their governments and businesses – from the benefits of the online world, and a better understanding of societies and their needs.
It’s increasingly being reported that users are more inclined to share the most personal data about themselves when they think no human will see it. The rise of artificial intelligence and machine learning both eases and exacerbates trust issues around personal data.
On the one hand, the hope is that machines can gather data and apply it in ways that are less prone to human error and bias. On the other, a machine is only as good as the person in control of it. The all-purpose “It’s the algorithm” is deployed as an excuse or explanation for bad decisions. An algorithm is just a set of instructions about how to do something, so if it’s going wrong, it’s because somebody made it wrong.
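The point can be made concrete with a toy sketch. Everything below – the function name, the postcodes, the income threshold – is invented for illustration; the idea is simply that an “algorithmic” decision is nothing more than rules a person wrote down, so any unfairness in the outcome was put there by its author:

```python
# Toy illustration: an "algorithmic" decision is just human-written rules.
# All names, postcodes and thresholds here are invented for the example.

def screen_applicant(income, postcode):
    """Approve or reject a hypothetical loan application."""
    # The author chose to penalise certain postcodes.
    # The machine did not decide this; a person did.
    blocked_postcodes = {"E1", "E2"}
    if postcode in blocked_postcodes:
        return "reject"
    return "approve" if income >= 20_000 else "reject"

# Two applicants with identical incomes get different outcomes,
# purely because of a rule a human wrote into the algorithm.
print(screen_applicant(30_000, "E1"))   # reject
print(screen_applicant(30_000, "SW1"))  # approve
```

When such a rule is buried inside a large system, “it’s the algorithm” sounds like an explanation, but the decision traces back to a line somebody wrote.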
Having said that, the tech industry has kept the concept of fake AI mostly hidden inside its casual-Friday chinos. Early this year, a number of people flagged things that looked like advanced machine learning but were in fact façades or deceptive user interfaces for old-fashioned human labour. These companies fake it until they make it, using low-paid “gig economy” humans to do jobs that they pretend are AI work.
For entrepreneurs who want investment but don’t want the bother of real AI, this tends to coexist with an equally cavalier approach to securing the data about their users, which is where we return directly to the concepts of trust and sharing because fake AI is a betrayal of them.
Amazon’s Mechanical Turk crowdsourcing piecework service seems to smirkingly rebadge this idea as a business USP rather than a moral quagmire; the original mechanical Turk was a late eighteenth-century automaton that played (and usually won) chess against human opponents.
The supposed automaton was actually more like a puppet, with a human chess player hidden inside. Mechanical Turk also blurs into what is now being called Collective Intelligence: the combination of well-established human intelligence (in every sense of the word) and machine learning.
Collective Intelligence is also the positive way of looking at what some people have criticised as Wizard of Oz technology: it can actually be a good thing when the strengths of human ingenuity and empathy combine with the things that AI and machine learning do much better than we do.
I’ll go into more detail about what I’m researching and building as time goes on, but at this stage it suffices to say that most people (including me) couldn’t tell if I had thousands of hours of machine learning in the box, or if it was just me using age-old skills of stagecraft and interrogation; my forthcoming project will be a bit of both.
Although “Wizard of Oz” AI and similar terms are being used pejoratively by many, the Wizard ultimately gives Dorothy and her friends what they want and need, even though he’s a fraud. As users of social media and owners of vast data sets, maybe we too can learn the values of home, heart, intelligence and courage on the way. I plan to turn secrecy and deceit inside out, tricking people into being free instead of pulling the wool over their eyes. Pay a lot of attention to the man behind the curtain.
Alistair is an artist and writer who has produced work for publication, performances, broadcast and installation in the UK, Europe, Scandinavia, China and Japan. As Alistair works on creating an original, interactive artwork, he will be offering regular updates and insights into the work as it progresses through a series of blog posts.
Alistair's ODI Fridays lunchtime lecture
ODI Fridays: Exploring trust and distrust of public information through art Fri, 02 Nov 2018, 13:00 Open Data Institute, 65 Clifton Street, London, EC2A 4JE