This piece was co-authored by Paola Verhaert and Madeleine Maxwell.
When we talk about informed consent, we talk about upholding the dignity of the individuals and communities whose information is being requested – whether by researchers, governments or aid organisations. In recent months, amid data breaches and stories about INGOs requiring refugees to share their biometric information, the once-lauded framework of informed consent, particularly within humanitarian contexts, is increasingly proving insufficient.
At the centre of the discussion is the question of whether communities purportedly being offered “informed consent” actually have the space to exercise agency and meaningful choice over how their information is collected and used. From what we’ve seen in our digital ID research (full report forthcoming), the answer is no.
What’s informed consent, anyway?
How we define informed consent varies across fields and contexts, but it typically consists of: the disclosure of relevant information (by data collectors), a clear understanding of how this information will be used, the capacity to make the decision in the first place, and the voluntary consent itself. As Danna Ingleton put it clearly: “Recognising the agency of the people that we work with means centralising the experience of those people in our work.”
The term ‘informed consent’ was first brought to the table in the 1950s, with further discussions taking place in medicine, law and philosophy in the 1970s. From the start, however, the concept was paired with criticism of its implementation. Discussions centred on the fact that evidence of informed consent – usually a signature – may not be enough: just because someone has signed a form, does that mean they truly understand everything they are agreeing to?
What we learn from these early discussions is that the difficulty of implementing informed consent is not a new challenge, though recent advances in technology and data-gathering practices have amplified these issues.
It’s not about the tech, but the tech is making it harder
The advent of new technologies means that the volume and sensitivity of the information being collected and shared – as well as the speed at which it is collected, and the corresponding risk attached to potential data leaks or breaches – are all higher than before. The rapid emergence of these technologies has heightened and widened concerns about the validity of obtaining informed consent in a number of ways:
- More data, and more granular data, is being collected. For example, the UN World Food Programme’s SCOPE system held details on 20 million people in 2018, with plans to increase this number. Organisations and institutions are also collecting data with higher levels of sensitivity, such as biometric data. While contested, biometric verification systems have seen various pilots and implementations in the last few years.
- It isn’t always clear how new technologies, and in particular, proprietary software, hold and share the data. And even without full data being shared, significant risks arise from sharing metadata.
- The consequences of how the data is used might spread far beyond the initial use case due to the scale of digital technologies, making it difficult to know what one is agreeing to.
- Data sharing becomes (almost too) easy: data can be shared so easily between different groups, and at such scale, that scope creep is par for the course (e.g. data collected for one purpose being shared in a database with the police).
- When individuals’ data is gathered and linked with other people’s, new risks arise – such as community-identifiable data (and, consequently, threats to freedom of association) – that people can’t reasonably be expected to foresee.
On a daily basis, humanitarian organisations rely on technology to collect and use information on the communities they serve. But seeking consent – in a way that ensures those providing it actually have the information, agency, control and alternatives they need – is a key part of using data responsibly: who is reflected in the data being collected or used? What do they know about what happens to their information, and how are they informed? Can they say no?
Gathering consent has been a way of addressing power asymmetries at play between those doing the collecting, and those from whom data is collected – trying to protect against abuse of power, and ensure that the least powerful still have control over their data. But if the process for gathering consent does not, in fact, adequately address that asymmetry in a meaningful way, then what would?
Recently, researchers have worked to shed light on the experiences and contextual interpretations of informed consent within vulnerable communities. Dragana Kaurin conducted research into the collection and use of refugees’ personal data and found that informed consent is rarely sought. When conducting research with migrants and refugees arriving in Italy, Data & Society noted that “there is a lack of meaningful informed consent in the collection of information from migrants and refugees,” observing that cultural differences, knowledge gaps and power inequalities may prevent migrants from truly giving meaningful consent.
In our forthcoming research on lived experiences with digital ID systems, we found similar problems. Many of the refugees we spoke to were not asked for their consent during biometric registration processes, but it was also clear that standard approaches to informed consent are not effective in an environment with vast power imbalances. If there is no way to refuse, then consent is not meaningful.
As Helen Nissenbaum has written: “The farce of consent as currently deployed is probably doing more harm as it gives the misimpression of meaningful control that we are guiltily ceding because we are too ignorant to do otherwise and are impatient for, or need, the proffered service.”
Guiding questions for the future
This leaves us with some difficult questions: If informed consent is not fit for every situation, what should we advocate for in its place, or add to it? How can we re-imagine informed consent in a way that upholds the dignity and rights of the communities we serve? Where can we look for examples of radical and effective transparency in complex systems? And, if we go beyond consent, what would a rights-respecting approach to collecting data look like?
We look forward to exploring these questions further and would love to hear from others on how discussions of consent are affecting your work.
Initial reading list on informed consent
- ‘Aiding Surveillance’ (Gus Hosein and Carly Nyst, Privacy International)
- ‘Ethics, technology and human rights: navigating new roads’ (Danna Ingleton, openDemocracy)
- ‘Rethinking Informed Consent in the Digital Age’ (Linda Raftree, Danna Ingleton, Emily Tomkys)
- ‘The Data Seesaw’ (Amos Doornbos)
- ‘Data Protection and Digital Agency for Refugees’ (Dragana Kaurin, Centre for International Governance Innovation)
- ‘Improving consent for (Design) research’ (Georgina Bourke, Projects by If)
- Gender At Work Podcast Series, Episode 4: ‘Reimagining Consent, Pleasure and Danger: Why and how do we need to reimagine ideas around consent, pleasure and danger?’ (Gender At Work)
- ‘Framework for consent policies’ (Responsible Data Forum)
- ‘Indigenous peoples and responsible data: an introductory reading list’ (Responsible Data)
- ‘Stop Surveillance Humanitarianism’ (Mark Latonero, New York Times)
- ‘Delivering Informed Consent: Lessons From Mae La’ (Devina Srivastava)
- ‘Digital Identity in the Migration & Refugee Context’ (Mark Latonero, Keith Hiatt, Antonella Napolitano, Giulia Clericetti, Melanie Penagos, Data & Society)
- ‘Stop thinking about consent: it isn’t possible and it isn’t right’ (Harvard Business Review, paywalled)
- ‘The Consentful Tech Project’
Image credit: Thomas Drouault on Unsplash