There’s no way around it: data is all the rage.
There are good reasons for this. Data is easier to get and cheaper to generate, and a growing number of studies and anecdotes indicate that evidence has a unique potential to support policy change, cultural change and accountability. This has real consequences for accountability work in general. The rapid proliferation of platforms and data-driven strategies has produced a wave of innovation, much of it highly visible, and there are opportunities for skill sharing and learning across contexts like never before. The Open Knowledge Foundation's School of Data is an excellent indication of how learning in this area can scale and distribute.
But data has a dark side too. Chase it far enough and almost all advocacy data will end up being about people. More often than not it is data about the behaviour and affiliations of people on the weak end of power imbalances, often people with little awareness and no control over how that data can be used.
As surveys, citizen reporting platforms and evidence-based campaigning become increasingly common for grassroots advocacy, this creates a host of new questions about how that data is generated and maintained – ethical and strategic questions about privacy, security, relationships and power. And it is not just a function of explicit data collection, either. Simply through the use of common tools for communication and project management, advocates generate a tremendous amount of data about themselves and their constituents. Everything from member databases, to email logs, to what Tactical Tech calls digital shadows – it all leaves a trace.
It’s one thing to say that people working for social change and more accountability should think carefully about these issues. We would like to go further and say that working for social good conveys a responsibility to proactively address these issues.
It’s not just about people implementing projects though. This responsibility goes all the way up the value chain for development and social good programming. Tune your ears to the hushed tones at a major conference coffee break and you might hear stories about the western donor whose insecure email thread got a grantee kicked out of the country, about the trainer who had their participants list confiscated at customs, for prompt use by the secret police. There are a lot of anecdotes floating around, and not quite enough understanding about how different groups experience the challenges and what they need to better meet them.
We are starting the Responsible Data Project to meet this need. We expect that at the end of the day, this will involve producing small, flexible tools that are easy to use and speak to the specific needs and decision points faced by activists, support organizations and funders working with data for social good. We’re reaching out to a number of people in the funding and support communities to help us think through this. We are also very excited to be teaming up with Tactical Tech to dig into what this all means for the activist community.
We will be co-facilitating the data security track at Tactical Tech’s Info Activism Camp this June, and expect that the amazing collection of activists they have convened, combined with TTC’s experience and strong approach to learning and sharing will make an excellent lab in which to start better understanding how activists relate to these risks, and what they need to better meet them.
We spend a lot of time scheming with local partners about how to make their tech strategies safer and more effective, and are strong believers that any strategy needs to be grounded in individual contexts and existing workflows. Addressing responsible data is no exception, so as a starting point, we are thinking a lot about decision points in campaigning and data collection. We want to look closely at how questions about data, privacy, ethics and security can and should be addressed at each of these points to meet objectives, while fulfilling the responsibility to protect and promote teams, partners and communities.
Here are some of the decision points (radically generalized) as we see them now.
- You collect operational data: when you use technology to communicate with your team, you generate data. What should you consider about this data?
- You have a strategic objective: can data help? If so, what kind, and where will it come from?
- You know what kind of information you need: how do you get the data in a way that ensures it is reliable, credible and useful?
- You have data: how do you keep it safe and how do you keep it useful?
- You have data you want to share: who do you target and how do you get it out there?
- Shit is hitting the fan (office raids or mass arrests): how do you assess your points of weakness, and what do you need to do to keep people, work and information safe?
At each of these decision points there will be questions about ethics (informed consent, forecasting community dynamics and relationships to power), about methodologies (when is the credibility of representative samples or academic methodologies more important than fast and powerful data?), and about security (identifying and modelling threats, balancing norms for openness with privacy concerns, understanding the dual uses of data). We are pretty sure that by getting better at addressing these issues along all the key decision points in planning and implementing data-driven projects, we can help activists foster better relationships with communities, enhance the influence of their evidence, and work to protect everyone involved along the way.
We’ll be working closely with Tactical Tech in the next few weeks to think about how to morph this into something useful for everyone at the camp, and for everyone else too. If you have any thoughts, or want to discuss, don’t hesitate to reach out. We’ll be blogging more about this soon.