How can an app developer make sure that an app doesn’t do more harm than good? For Amnesty International, that question could be one of life or death for human rights defenders using their new Panic Button app.
Over the last year, the Technology and Human Rights team at AI have been developing a mobile app that responds directly to the high-risk scenarios that human rights defenders (HRDs) face. Built for Android, the Panic Button app embeds a hidden function that, when activated, sends a distress signal via SMS to three trusted contacts in an emergency. Designed specifically to respond to the high risk of detention faced by HRDs, the development team have spent a year creating and testing a prototype, which received a grant of £100,000 in the Google Global Impact Challenge this summer. Having released a functional first version of the app, the project team are currently planning three pilots with HRD networks in Asia-Pacific, Central America and East Africa. You can read more here about what it does and who it’s for.
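To give a rough sense of the mechanics, here is a minimal sketch of how an Android app can push a pre-agreed text message to a short list of trusted numbers over SMS. It is an illustration of the general approach only, with hypothetical names, not the Panic Button’s actual code; a real app would also need the SEND_SMS permission and careful handling of failures.

```kotlin
// Illustrative sketch only: one way an Android app could send a distress SMS
// to a small list of trusted contacts. Names are hypothetical; this is not
// the actual Panic Button implementation.
import android.telephony.SmsManager

class AlertSender(
    private val trustedContacts: List<String>, // e.g. three numbers agreed in advance
    private val alertMessage: String           // pre-agreed distress text
) {
    // Requires the SEND_SMS permission in the app's manifest.
    fun sendAlert() {
        val sms = SmsManager.getDefault()
        for (number in trustedContacts) {
            // Sending via SMS keeps the alert working without mobile data,
            // but it still depends on signal and on remaining phone credit.
            sms.sendTextMessage(number, null, alertMessage, null, null)
        }
    }
}
```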
Unfortunately, the Panic Button is not a cure-all for threatened activists. In some cases, having a phone with GPS enabled can pose a bigger threat (it reveals your location to your telecommunications company, which may be in league with an adversary) than the one Panic Button aims to fix (you have been detained or abducted and no one knows your location or that you’ve been seized). Thinking critically about when and how a human rights defender should use Panic Button should be part of a larger process of understanding the threats and risks an individual may face. That process of risk assessment is the substance of trainings, exercises, curricula, group discussions, even entire books. So how does a human rights defender with access to the Panic Button app know whether it is smart for them to use it? If they decide to use the app, how do they know the best way to incorporate it into their work to get the maximum benefit? And how does AI make sure that users have enough information to make smart choices about using the app?
These are tough questions.
To explore how to approach them and what is possible with ‘in-app’ risk assessments (a walkthrough in the Panic Button application for users to self-assess their risk when using the app), AI hosted a workshop at its secretariat. The Engine Room helped with the framing, facilitation, and invites, bringing together a human rights defender involved in testing the app, digital security trainers, media development trainers, user experience experts, and AI team members to tackle the problem and scope out solutions during a one-day workshop.
The challenge was to design a guided risk assessment within the app with the dual goal of improving HRDs’ decision-making around security and increasing the chances of the app being effective. Risk assessments are ideally carried out well ahead of any action, and should guide users through the potential risks that may arise and how they might best prepare for them.
Often risk assessments are guided by trainers, who can explain jargon or give examples of possible scenarios. While there can be huge advantages in a trainerless self-assessment tool (especially for remote or isolated activists), there is a danger that the language and examples used within the in-app assessment may not be accessible or applicable, and could be a dead end for users. While the app primarily targets an active, informed user group of HRDs, it also needs to be accessible to those who are new to activism or are thrown into high-risk action ‘overnight’ by sharp turns in policy or governance. So at every stage developers need to consider the scope of the process and ask themselves: ‘Who is this for? What level of risk is this dealing with?’
Rather than considering this process in a vacuum, the group were given potential use cases and incident scenarios that would allow them to consider the use of the Panic Button and help guide the development of the risk assessment function.
One of the use cases was a female HRD from the DRC, the well-known head of a women’s rights organization based in Goma, documenting abuses by both government and rebel forces in regions where territorial conflict is active. As head of the organization, she travels widely, sharing information and advocating for justice, and regularly faces threats crossing checkpoints controlled by various armed groups. On one trip to Rwanda for a conference, her car is pulled over by police, allegedly because of an issue with her Congolese license plates. While the police speak to the driver, she is pulled out of her car and into another vehicle.
She is traveling with her Android phone, and she activates the Panic Button from the back of the car as she is taken to a local police station and questioned. She thinks her mobile has some credit (at least for one message), but she’s not sure, as she doesn’t use the smartphone very often.
When the group discussed this case, it was clear that for the app to be useful in the moment of an attack, HRDs would need to prepare extensively. She would need to agree with her contacts on protocols and procedures for what happens when the Panic Button is activated. She would need to be ready for the moment.
With this emphasis on preparation, the workshop group discussed how to structure a clear-cut preparation process. We were able to map out key steps that activists could follow to make sure that, at the moment they pushed the alert, they would spark a chain of actions that would bring them meaningful assistance as rapidly as possible whilst also warning others in their network.
This narrowing of scope, from how to assess your risk to how to effectively prepare to use Panic Button, made it possible to start designing possible models for the in-app risk assessment. By the end of the day the group had sketched and agreed upon two plausible formats, a rudimentary design of possible user interfaces, and the scope of questions and threats that an in-app assessment can address. As the app itself is piloted and disseminated, information to help individuals prepare for the moment when they might need to activate Panic Button must be available – including to those who are out of range of trainers.
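As a very rough illustration of what a trainerless, in-app preparation walkthrough might look like under the hood, the sketch below models the assessment as a short series of yes/no questions, each tied to a preparation step the user is prompted to complete before relying on the app. The questions and structure here are hypothetical examples, not the formats agreed at the workshop.

```kotlin
// Hypothetical sketch of an in-app preparation checklist; the questions and
// structure are illustrative, not the designs agreed at the workshop.
data class CheckItem(
    val question: String,        // plain-language prompt shown to the user
    val preparationStep: String  // what to do if the answer is "no"
)

val preparationChecklist = listOf(
    CheckItem(
        "Have you chosen three trusted contacts who know what to do if you send an alert?",
        "Agree on a response plan with each contact before you travel."
    ),
    CheckItem(
        "Does your phone usually have enough credit to send at least one SMS?",
        "Top up before any high-risk trip."
    ),
    CheckItem(
        "Could sharing your location put you at greater risk than being out of contact?",
        "Discuss with your network whether the app is right for this situation."
    )
)

// A walkthrough could simply surface the preparation step for every item
// the user answers "no" to.
fun unpreparedSteps(answers: List<Boolean>): List<String> =
    preparationChecklist.zip(answers)
        .filter { (_, ready) -> !ready }
        .map { (item, _) -> item.preparationStep }
```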
Thanks to Libby Powell of Radar for her help in documenting the event, sharing her photos, and helping to write this post. The post is also published on NDI Tech’s blog, Demworks.
And for more on our collaborations with Amnesty International, see this post about our work to better understand how AI researchers are using technology so the AI network can provide effective support to its research team.
Update: We’re thrilled to learn that the Panic Button has been included in the 2014 Nominet Trust 100 most inspiring applications of tech for social good! http://socialtech.org.uk/projects/panic-button/
Congratulations to everyone involved!
More information about the Panic Button, and instructions on how to download it, can be found here: https://panicbutton.io/