This post was written by Tom Walker and Fieke Jansen, with input from Rachel Rank and Lorena Klos.
For many human rights funders, communicating clearly about the activities they fund (and why they are valuable) is an important part of their work. Indeed, funders often favour transparency on principle as an inherent good. How can they balance this with the responsibility to avoid harm, both to their grantees and to the people those grantees aim to help?
With Ariadne and 360Giving, The Engine Room is starting a research project to look deeper into these issues, supported by Digital Impact (part of the Digital Civil Society Lab at Stanford University).
We’ve been interested in the topic since 2014, when we wrote a report based on interviews with 28 staff in 14 different grant-making organisations, and it’s time for an update. We’ll be talking to a wide range of people, with the aim of finding out what they need to know to make strong, effective decisions. Before getting started, we’ve been looking at what we already know about funders’ data-sharing practices through a series of blogposts.
The first looked at how and why funders are sharing grantee data publicly and with other funders. This post, the second of two, discusses how funders are publicly describing their approach to the responsible data implications of sharing grantee data, as well as steps they’re taking in response to those issues.
Challenges to sharing data responsibly
The mapping exercise raised a number of challenges facing funders who seek to share data, including adapting to closing civic space, balancing transparency with limiting harm to grantees, and collecting informed consent.
Transparency in the context of closing civic space
In many countries, civil society organisations’ space to operate is diminishing rapidly. According to the International Center for Not-for-Profit Law (ICNL), in 2015-16 governments worldwide adopted 64 laws, regulations, and other initiatives to limit the operations of non-governmental organisations, including by restricting foreign funding or threatening to de-register them. Funding organisations with a prominent founder may also be subject to public criticism on the basis of their grant-making decisions. As the Transparency and Accountability Initiative’s 2017 report Distract, Divide and Detach explains, governments are increasingly attempting to discredit civil society organisations by focusing on any foreign connections they may have. Could publishing data demonstrating that an organisation has received money from a foreign funder thus increase the risks faced by a grantee?
As well as targeting organisations legally, is there a risk that adversaries could use data shared about grants to surveil or physically target individuals or organisations? This mapping exercise found no documented cases where a grantee or the people it seeks to help were physically targeted as a direct result of data published by a funder. (If you know of any, let us know.)
Although this could be because there are no such cases, it could also indicate the difficulty of drawing a direct connection between a funder’s data and individual cases of harm. Nevertheless, the human rights funding community has a responsibility to use, share and publish data in a way that avoids causing harm. In particular, even when data has been de-identified, it may be possible for adversaries to re-identify individuals or organisations by combining it with other datasets. How regularly are funders thinking about these issues?
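To make the re-identification risk concrete, the hypothetical sketch below joins a “de-identified” grants dataset with a public registry of organisations on two seemingly innocuous fields (city and focus area). All organisation names, field names and values here are invented for illustration; this is a minimal sketch of the general technique, not a description of any real funder’s data.

```python
# Hypothetical illustration: even after removing names, a "de-identified"
# grants dataset can sometimes be re-identified by joining it with a
# public dataset on quasi-identifiers (here: city + focus area).

deidentified_grants = [
    {"grant_id": "G-01", "city": "Metroville", "focus": "digital rights", "amount": 50000},
    {"grant_id": "G-02", "city": "Lakeside", "focus": "press freedom", "amount": 20000},
]

# A separate, publicly available registry (e.g. a national NGO register).
public_registry = [
    {"org": "Metroville Digital Rights Centre", "city": "Metroville", "focus": "digital rights"},
    {"org": "Lakeside Media Watch", "city": "Lakeside", "focus": "press freedom"},
]

def reidentify(grants, registry):
    """Link each grant to registry entries sharing the same quasi-identifiers."""
    matches = {}
    for grant in grants:
        candidates = [
            entry["org"]
            for entry in registry
            if entry["city"] == grant["city"] and entry["focus"] == grant["focus"]
        ]
        # A single candidate means the grantee is effectively re-identified,
        # even though no names appear in the grants data itself.
        if len(candidates) == 1:
            matches[grant["grant_id"]] = candidates[0]
    return matches

print(reidentify(deidentified_grants, public_registry))
```

In this toy example, both grants link to exactly one organisation each, which is why de-identification alone is often considered insufficient protection when complementary public datasets exist.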
Publishing data in volatile environments
Political environments and attitudes to funders can also change rapidly, making it harder for organisations to accurately assess future risks. As the Oak Foundation’s Head of Communications puts it in a 2015 blogpost:
“Words [from published grant data] interpreted as appropriate language one day can be interpreted as anti-development, aggressive, hostile, or illegal at a later date…and this is hard to predict.”
In one example of this trend, in 2015 the Indian government cancelled the registrations of almost 9,000 civil society organisations that had received foreign funding, as well as publicly placing funders such as the Ford Foundation on a ‘watch list’. In such changing contexts, grantee data considered to be innocuous when first published could become more sensitive, potentially increasing risks to grantees as a result.
Transparency vs do no harm?
According to research by SIMLab (supported by a previous Digital Impact grant), some grantees are concerned about pressure from funders to share data:
“One former implementer told us that the pressure to provide funders with granular data for accountability, decision-making or just ‘openness’ was overwhelming, ‘but it’s hard to make sure it won’t get leaked, hacked or casually shared.’”
This also highlights a potential tension between funders’ desire for transparency and openness – described in our first blogpost – and the principle of ‘do no harm.’ The report highlighted that the responsible data principle of ‘data minimisation’, which involves only collecting data that is needed for a specific use case, “is not yet culturally ingrained in all stakeholders in the RD system.”
This tension is also visible in rhetoric around data-sharing. In 2015, the IATI secretariat complimented USAID’s approach to publishing data, noting that it would “result in USAID publishing…the maximum level of compliance possible to continue to adhere to privacy laws.” Is there a risk that a focus on legal compliance could take precedence over a considered assessment of risks? We hope to look into this question in more detail during the project.
Writing in 2015, Thomas Carothers suggested that funders may be “gravitating toward a kind of ‘transparency lite’ approach — being quite transparent about the specifics of their programming in contexts where they are not facing closing space, but selectively reducing available programme information in restrictive environments.” This project will aim to find out if this trend has continued, and any implications that this may have for funders’ approaches. As mentioned above, even a country that has space for civil society today can become a closed or contested space tomorrow.
This mapping exercise found only limited publicly available information about funders’ processes when asking grantees for consent to share or publish their data. This raises a number of questions about how consent is implemented in practice. When and how are grantees informed about the potential sharing of application and organisation data? As discussed at a Human Rights Funders Network event in 2016, can grantees opt out of sharing information without negative consequences? As Danna Ingleton and Keith Armstrong, reporting back from the event, summarised the concern:
“Will a prospective grantee view a question marked “optional” on a grant application as genuinely optional, or will they feel compelled to answer it because they think it will improve their chances of receiving the grant?”
Separately, there is also a question of responsibility. Who is responsible for ensuring that personal data is not published without consent: the individual funding organisations, or the platforms that host the data?
Several funders have publicly described how they have set up processes to address this: for example, the Oak Foundation states that before grant descriptions are published publicly, they are reviewed and approved by the partner themselves. This process was set up partly in response to a 2011 incident in which Oak mistakenly published descriptions of grants that staff used internally (rather than the edited versions included in annual reports) as part of their grants database, provoking criticism from a blogger that the descriptions were “ambiguous and inflammatory.”
360Giving provides guidelines for asking grantees’ consent to publish data, and examples of how to present such requests. We’re interested in hearing about any other examples, and looking forward to finding out more about how other organisations’ processes work.
How can funders and grantees assess the risks of sharing data?
At the Human Rights Funders Network’s 2016 event on responsible data practice, discussion focused on the need for grant-makers to evaluate the risks of sharing data more accurately. On the basis of this mapping exercise, it is unclear how widely judgements about what data is ‘sensitive’ rest with individual programme officers or programme heads in funding organisations, though we found several instances where this was the case. Does this risk relying too much on the accuracy of an individual’s interpretation of a grantee’s situation?
Trade-offs: Publishing funding data responsibly may limit its usefulness for analysis
As the UK Data Service (which, among other things, archives data collected by civil society organisations) points out, removing some data may make overall datasets less comprehensive, and thus less useful for analysis. For example, although a dataset might indicate that human rights groups working on a specific topic are underfunded, in reality several funders may have intentionally excluded them from a dataset because of responsible data considerations. Initiatives collecting and aggregating data in this way often deal with this by explicitly stating their methodology and highlighting limitations in the data sample. We’re looking forward to hearing if other organisations are thinking about these issues, too.
How are funders trying to share grantee data responsibly?
Human rights funders need to balance a number of issues when sharing data, including mitigating potential harm to an individual or grantee, safeguarding the reliability of the data when published, and promoting the sharing of data whenever possible. This mapping found indications that funders are thinking about these issues in a number of ways:
Some funders may be collecting less data from grantees. On the basis of interviews conducted in 2015, the Carnegie Endowment suggested that “some funders are discussing internally whether they should…reduce [grantees’] vulnerability to monitoring by hostile security services, by collecting less detailed information about them [and] asking grantees to report in less detail about what they do, and conducting fewer site visits to grantee organisations.”
Other funders may be publishing or sharing less of the data they collect. Funders may decide not to publish grantee data at all, or may redact it by stripping out specific data fields (such as grantee addresses) or choosing not to publish selected elements. As an example of this approach, the Advancing Human Rights Initiative asks funders to list grantees working on sensitive projects as “anonymous,” and requests that they only disclose information that would not compromise the grantee’s safety. Several funders chose to submit all grants data anonymously. Notably, the initiative also said that it does not publicly share aggregate data (such as country-level trends), perhaps because of concerns about the uses other actors could make of data in this format.
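As a hypothetical sketch of the field-level redaction approach described above, a funder’s publishing pipeline might strip sensitive fields and anonymise flagged grantees before release. The field names and the `sensitive` flag here are invented for illustration and do not reflect any particular funder’s schema:

```python
# Hypothetical sketch: strip sensitive fields and anonymise flagged grantees
# before publishing grant records. Field names are invented for illustration.

SENSITIVE_FIELDS = {"grantee_address", "contact_name", "contact_email"}

def redact_for_publication(record):
    """Return a copy of a grant record with sensitive fields removed."""
    published = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    # Grants flagged as sensitive are listed as "anonymous", mirroring the
    # practice of listing at-risk grantees anonymously described above.
    if record.get("sensitive"):
        published["grantee_name"] = "anonymous"
    published.pop("sensitive", None)  # the internal flag itself is not published
    return published

record = {
    "grantee_name": "Example Rights Group",
    "grantee_address": "1 Example Street",
    "contact_name": "A. Person",
    "contact_email": "a@example.org",
    "amount": 10000,
    "sensitive": True,
}
print(redact_for_publication(record))
```

The design choice worth noting is that redaction happens on a copy at publication time, so the funder’s internal records remain complete while the published dataset carries only the fields deemed safe to share.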
Some groups create (or follow) guidelines for publishing grantee data responsibly. Some organisations are responding by creating policies or guidelines. For example, 360Giving’s guidelines to funders publishing data advise that: “There can be cases where grants data is sensitive for reasons other than privacy. For example, the address of a women’s refuge might be inappropriate to include in data about a grant to that organisation.” In a small number of cases, the funders themselves provide guidelines: the UK-based Big Lottery Fund gives grantees advice on how to submit data to them in line with national data protection legislation. The Digital Impact Toolkit hosts a draft data sharing policy, while the Responsible Data community’s Handbook includes a section on sharing data responsibly.
To take another example, organisations publishing to the IATI Standard that want to exclude some data are required to state their reasons for doing so in a public exclusion policy. In its guidance, IATI states that funders may not wish to publish details of their work and the areas in which it takes place “if your organisation carries out works which could be considered by states or societies as illegal or unacceptable,” or “if you are working in sensitive geographic areas.” Funders listed in the IATI registry adopt different approaches to this. Some publish exclusion policies stating that, in general terms, they will not publish data for programmes working on issues that are considered sensitive; others say that they will exclude specific types of information, while still others include no explicit exclusion policies.
In the next stage of this project, we will be conducting surveys and interviewing funders to learn more. Do you have best practices, ideas or experiences to share? Let us know – we’d love to hear from you.