In 2018 we worked with Oxfam to publish a landmark report on the use of biometric data – fingerprints, iris scans, voiceprints and so on – in the humanitarian sector. Our report looked at how these types of data were being collected and used, and raised critical questions around potential risks and harms.
This year, with the support of Open Society Foundations’ Migration Initiative, we are excited to be starting new research that will build on this foundation by taking stock of developments in the sector and looking into new and emerging evidence of harms and benefits. Through this new research, we hope to support a more justice- and evidence-based approach to using biometrics in humanitarian work.
Join our first community call on 19 April, 10 AM EST / 4 PM CET
If you’re a humanitarian practitioner or just interested in biometrics and responsible data, please join our upcoming Community Call, where we’ll be introducing the project and hearing from practitioners on the theme. Register for the call.
In 2021, the humanitarian and development sector saw several worst-case scenarios related to the management of biometric data come to pass. In Bangladesh, biometric data collected by the UNHCR from Rohingya refugees was shared with the Myanmar government – the same government that was responsible for their displacement in the first place. In Afghanistan, on the eve of the Taliban taking over the country following the withdrawal of US forces, organisations scrambled to destroy information collected on those served by US programmes. We know that the Taliban took charge of biometric databases left behind by US forces, possibly endangering thousands of people.
We have also seen cases such as the hacking attack on the International Committee of the Red Cross (ICRC), which compromised the personal data of over 500,000 people. While there was no biometric data in the compromised database, this case showed that even responsible actors are vulnerable to breaches and attacks – and that humanitarian organisations are a target for malicious actors.
While the above cases have done much to raise awareness of responsible data issues related to the misuse of biometric data, far less attention has been paid thus far to the structures that make such misuse possible in the first place – such as biometrics use policies, or the lack thereof. Meanwhile, unscrutinised claims by the private sector and funding organisations about the ability of emerging technologies to address social issues threaten to further entrench structures of harm within humanitarian responses.
We believe that building knowledge on these issues is fundamental to ongoing efforts to foster better practices in the humanitarian sector.
With all this in mind, here are some of the key questions that will guide our work:
- How is biometric data that is gathered by humanitarian organisations governed? (i.e. how is the data used, transferred and shared, and who has access to it?)
- What are humanitarian organisations’ policies on biometrics use? How have they been developed, and how are they applied in practice?
- What kinds of evidence exist on the benefits, harms and risks of biometric data use in the humanitarian context?
- How might existing regulatory frameworks support more responsible use of biometrics in the humanitarian sector?
We would love to hear from you!
As mentioned above, our first community call is on 19 April – it’s open to anyone who is interested in the topic, so feel free to attend even if you’d just like to come and listen.
And if you’re a humanitarian practitioner working with data and biometrics, or part of a community impacted by such schemes, our research team would love to hear from you. Please don’t hesitate to get in touch with Teresa at teresa[at]theengineroom.org.
Photo by Marek Piwnicki via Unsplash.