As we become ever more reliant on online services, questions around our digital identities become more pertinent – here we explore what identity means in a digital age, and how we could develop an ethical and accessible framework for a digital identity system
So many elements of our lives are now reliant on internet services – we buy things, pay taxes, make friends, run businesses, communicate and book holidays online. Nearly half of the world’s population is online.
These services and interactions require some form of identity verification. Yet our digital identity is fragmented. People have to maintain numerous accounts, remember countless passwords and enter personal details repeatedly into the databases of companies, governments and banks which are then stored all over the world.
This is messy, confusing and sometimes dangerous. Misidentifying a patient who needs medicine, or operating on the wrong person, can harm two people; yet being too easy to identify can breach privacy or allow someone to spend your money using stolen bank account details.
Digital identity relies on data to help organisations understand how likely we are to be who we say we are. Digital identity can also help us assert our data rights, such as the right to move data about us from one organisation to another.
Can our ‘data thinking’ help make digital identity simpler and more effective while managing the risks that it brings?
First we need to understand the basics of digital identity. This piece examines what identity in a digital age means and the challenges and risks that digital identity systems will need to tackle.
What is identity?
Identity means many different things to different people in different contexts. Everything from physical characteristics such as eye colour or face, to demographic information such as age or race, to government-issued ID is, or contributes to, identity. Identity can be shorthand for your sense of self or an alias you’re known by in specific communities. It can also be a role or job (‘the Pope’), login details, or even your online reputation. In most cases, what counts as your ‘identity’ is something unique – the thing that distinguishes an individual from a group.
Where might we need to use a digital identity?
As more people use online services, they need a digital identity to:
- get a bank account, mortgage or loan
- apply for a visa or confirm eligibility to travel abroad
- set up a business or online shop
- assert people’s rights over data about them, for example the right to data portability
- use specific services online (some services have legal age requirements – eg must be 13+ to use social media services, 18+ to access adult content in many countries)
Delivering digital identity systems is hard
Organisations want to know the identity of their users so that they can manage risk, meet legal requirements and increase trust. They also want to understand users better to improve (and sell more) products and services.
When transactions involve personal data (and especially sensitive personal data) companies have to comply with data protection regulations. GDPR aims to protect against personal data exploitation and encourage responsible practices to reduce risk of data breaches and privacy infringement. Know Your Customer (KYC) and Anti-Money Laundering (AML) legislation help to manage the risk of fraud, identity theft and money laundering for the benefit of citizens, businesses and wider society. GDPR, KYC and AML are generally seen as compliance costs for businesses. The ODI has previously argued that GDPR creates opportunities for innovation – perhaps KYC and AML create opportunities too.
Digital identity is used to establish trust, so often needs data about us from authoritative sources
In digital transactions, along with legal requirements, companies and services often want to uniquely identify their users or validate something about them to establish trust and manage risk.
Why is trust important?
If an entity can identify or discover things about you, it can better establish how risky it is to buy from or sell to you. Entities can assess your eligibility (eg citizenship, age, valid driving licence), reputation (eg a credit rating or criminal record) or the reputation of people grouped in the same category as you. This helps protect services from losing money, goods, reputation or other valued assets.
If an entity or service provider holds a verified, unique identity for you, you are disincentivised from causing harm: you are identifiable and therefore penalties can be applied (eg your driving licence could be revoked or you could be barred from services in the future). This is a type of ‘insurance’ against the risk of dealing with unknown persons.
Data about us is used to establish trust
We think trust is essential to create an open and trustworthy data ecosystem. In the case of digital identity, verification comes either from an authoritative source or from a resilient reputation system which isn’t easily gamed. It’s not enough to simply assert things yourself. So when we want to enter into a transaction, the provider seeks to verify something that isn’t simply user generated, for example that you have a valid credit card, address, passport or account in good standing. Therefore identifiers, attributes and verified claims play a major role.
Attributes are established using some form of evidence. The validity of the evidence determines how meaningful the attribute is. For evidence, we care about provenance and quality. For example, passports and birth certificates provided by the government are highly trusted sources of evidence for date of birth and age. However, it’s impractical and unsafe to use these for all services that want to know our age.
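The relationship between attributes, evidence and verification can be sketched in code. The sketch below is illustrative only – the issuer name, key handling and HMAC-based signature are simplifying assumptions; real systems use public-key signatures and standards such as W3C Verifiable Credentials:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret held by an authoritative issuer (eg a passport
# office). A real deployment would use asymmetric keys, not a shared secret.
ISSUER_SECRET = b"passport-office-demo-key"

def issue_claim(subject: str, attribute: str, value: str) -> dict:
    """An authoritative source signs an attribute about a subject."""
    claim = {
        "subject": subject,
        "attribute": attribute,
        "value": value,
        "issuer": "passport-office",
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(claim: dict) -> bool:
    """A relying party checks the claim really came from the trusted issuer."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

claim = issue_claim("alice", "age_over_18", "true")
assert verify_claim(claim)       # a genuine claim is accepted
claim["value"] = "false"
assert not verify_claim(claim)   # a tampered claim is rejected
```

The point of the sketch is that the relying party never inspects the passport itself: it only checks that a trusted issuer vouched for the attribute, which is why the provenance and quality of the evidence matter so much.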
Trust can be transferred from one service to another. For example, an online service will let us purchase items using our bank accounts, which were set up through a rigorous passport-checked verification process, removing the need for the online service to check the validity of the source of funds.
Some attributes can be self-asserted, but organisations often need identification to be verified by objective data. Mortgage brokers don’t just want my assertion that I have income; they want proof from a source that they trust more than me – for example my bank or employer. Very occasionally, services are based entirely on trust without external verification – eg this project that provides microloans to help people out of poverty does not require any supporting documentation.
Risks around digital transactions
Digital transactions carry additional risks compared to in-person transactions. Firstly, since the user is not physically present, it’s easier to be anonymous or to use a false or stolen identity. There are two elements to this: verifying that the identity is not fake, and then verifying that the person using the ID is its genuine owner. This makes it easier to act fraudulently online.
Secondly, technology allows for large-scale programmatic attacks on consumers’ data and transaction details, leading to data breaches and a greater risk of impersonation. Equifax, Facebook, LinkedIn, Yahoo, eBay and numerous other companies have suffered data breaches that exposed identity data.
In offline transactions, the risk sits almost exclusively with the seller or person offering the service – yet with online transactions the risk is two-way: the seller/provider still bears risk, but there is also the risk of the buyer’s identity or personal data being stolen in the process.
Ethical and practical concerns with digital identity systems
There is a range of ethical and practical challenges around digital identity and attributes.
How do we protect privacy and anonymity? Would ubiquitous identity systems that require us to use the same unique identity for every online service and with every thought we post online stifle freedom of speech, self-expression or whistleblowing? What might the consequences be of being able to connect all our online activity to our real-world selves?
There’s also the problem of exclusion. Most digital identity systems don’t work for everyone – the UK government’s GOV.UK Verify service currently has a success rate of 45% – and risk alienating people who don’t own the correct documentation, aren’t online, can’t afford to use technology, don’t want to use technology or don’t understand how to use digital identity.
There are risks around the use of identity systems to exploit or control people. These include concerns about at-scale surveillance, profiling, algorithmic decision making and discrimination.
Practically, any identity system has to be secure and robust enough to handle bad actors, fraud and abuse, yet still be user-friendly and not too burdensome. This is a difficult balance to strike. Any new system faces the difficulties of adoption and onboarding. Network effects, whereby a service gets better the more people use it, often work against new entrants. There are also legal requirements and standards to meet, as well as robust security and encryption requirements, which many data-holding companies have regularly failed to satisfy.
We are interested in which offerings and approaches exist today; what good common principles for a digital identity system would be; and who is best placed to deliver services that work for users, protect individuals and organisations, and meet legal requirements.
We’re working on a report to further explore the implications and challenges of digital identity, and how some existing providers attempt to meet them. Please get in touch with Waverley Coquet if you’d like to contribute!