Technology in the service of justice

Julia Keseru

Last year was full of grief, challenges, transformation and new visions born out of struggle. In 2020, digital technologies helped and harmed us more than ever before, illuminating both what is appealing and what is distressing about our online reality.

As the Mozilla Foundation noted in their latest Internet Health Report, thanks to the internet, billions of people were able to connect with their families, work remotely, order food and attend classes last year.

At the same time, the growing reliance on digital tools raised questions about the dysfunctions and injustices of our societies. The pandemic-driven digital emergency further expanded the unchecked power of major tech industry players, cementing business models that perpetuate old forms of oppression, surveillance and control. On top of all that, nearly half of the world’s population still lacks basic internet access, making it impossible for the internet to serve as a reliable access point to basic services.

Amid these circumstances, it has become clear that digital technologies – and access to them – need to be radically reimagined to meet the needs, politics and principles of everyone, not just a privileged few. 

But reimagining digital technology systems involves individual as well as institutional decisions. 

In the sections below I will attempt to outline how The Engine Room aspires to approach our own technical choices with an explicit focus on justice and anti-oppression. I hope this framing is useful to activists wanting to choose tech that matches their mission, and that the definitions and frameworks offered here prove helpful for other sectors and groups as well.

Defining justice-focused tech 

In line with our new organisational strategy, where we define justice and equity in relation to power, we have created the following framework to help assess the digital solutions we choose for our work. 

Justice-focused approaches to technology

Justice-focused technologies are computer-assisted tools, systems, methods and processes that intentionally weave anti-oppression strategies into all stages of their creation, maintenance and uptake. 

For makers of technology, an explicitly justice-focused approach means proactive investment in exploring how a proposed process or tool may recreate existing systemic injustices, and investment in mitigation plans that outline measures and actions to close the gaps of inequity. 

For users of technology, an explicitly justice-focused approach means prioritising technology solutions that were designed, created, tested and implemented along these lines, and deploying them in ways that safeguard against potential forms of systemic oppression.

This framework is by no means perfect or exhaustive, but we hold it in our minds to know what we’re ultimately striving for. We use it in our research (to guide our analysis), in our support work (to identify potential tools or approaches with our partners) and in internal decision-making around our technical tools (as an aspirational yardstick).

But then how exactly do we get from an abstract definition to concrete decisions? As a first step, we find that the following questions can help ground decisions around designing or choosing tech tools, or both (a rough sketch of one way to record the answers follows the list):

  • Who has participated in the design process? Was it only dominant social groups, or did excluded and oppressed communities have a seat at the table too? 
  • How accessible is the tech for users with various lived experiences and abilities? 
  • Who is under- or overrepresented by the tech or the data behind it? What are the assumptions and implicit biases that have shaped its creation? 
  • How was consent sought from those who may be affected by the deployment of the tech or the data behind it? How is data collected and stored, and who is it shared with?
  • How safe and secure is the tech? Who might be harmed if it falls into the wrong hands? 
  • How transparent is the financing and ownership of the tech? Who owns it, who controls it, and how are decisions made at the highest levels? How transparent are those decisions and what mechanisms exist to hold those with decision-making power to account?
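
None of these questions has a simple yes/no answer, but it can help to record the answers somewhere reviewable. As a purely illustrative sketch (not a tool or template The Engine Room actually uses; every field name and the example entry below are hypothetical), here is one way a team might keep a lightweight checklist of where a candidate tool stands against these questions:

```python
# Hypothetical sketch: recording answers to the assessment questions above
# as a lightweight, reviewable checklist. Field names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ToolAssessment:
    tool_name: str
    # Who participated in the design process?
    design_included_affected_communities: bool
    # How accessible is the tech for various lived experiences and abilities?
    accessibility_notes: str
    # Who is under- or overrepresented by the tech or the data behind it?
    representation_concerns: list = field(default_factory=list)
    # How was consent sought? How is data collected, stored and shared?
    consent_and_data_notes: str = ""
    # How safe and secure is the tech? Who might be harmed?
    security_notes: str = ""
    # How transparent are financing, ownership and governance?
    ownership_notes: str = ""

    def open_questions(self):
        """Return the areas still missing answers, to flag follow-up work."""
        gaps = []
        if not self.design_included_affected_communities:
            gaps.append("design process excluded affected communities")
        for label, value in [
            ("accessibility", self.accessibility_notes),
            ("consent and data handling", self.consent_and_data_notes),
            ("safety and security", self.security_notes),
            ("financing and ownership", self.ownership_notes),
        ]:
            if not value:
                gaps.append(f"no notes yet on {label}")
        return gaps

# Example: a partially completed assessment for a hypothetical tool.
assessment = ToolAssessment(
    tool_name="example-messaging-app",
    design_included_affected_communities=False,
    accessibility_notes="screen-reader support untested",
)
print(assessment.open_questions())
```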

Luckily, there are a growing number of resources that can be used to navigate these complex questions:

  • Sasha Costanza-Chock’s book Design Justice and the guidelines and resources created by the Design Justice Network help center excluded, oppressed and marginalised communities in any design process (technological and otherwise).
  • Investment in multilingual services and digital accessibility can go a long way toward engaging audiences with diverse backgrounds and abilities.
  • Training resources like We All Count offer a variety of ways to learn about equity in data-driven initiatives.
  • Frameworks such as Consentful Tech provide helpful benchmarks when assessing whether technologies were built with consent at their core.
  • Resources created by the Responsible Data Forum can help assess how data might harm even with the best of intentions, while the growing community of organisational security practitioners can provide concrete guidance on vulnerabilities within certain technologies.
  • Initiatives like OpenOwnership provide standards and principles on effective beneficial ownership disclosure, and Ranking Digital Rights offers guidance on achievable levels of compliance with human rights standards for tech companies.

As the Astraea Foundation notes in a recent report, there is also a growing movement of activists (organisers, researchers, advocates and movement technologists) who view technology as a tool for liberation, working hard to shift the balance of power by gaining more access to the development, control and ownership of technologies.

Justice-focused technologies in practice

In an ideal world, we would have ample justice-focused technologies from which to choose; however, the reality is usually much more complicated. In practice, the answers to these questions are often disappointing, even for the tools we ultimately choose. 

First, it is nearly impossible to find real-world examples of digital tools that check all the boxes of anti-oppression, justice and equity. For example, platforms with end-to-end encryption (a technology that is pro-privacy and anti-surveillance) are often used in right-wing extremist organising, because they have more robust built-in privacy features than their counterparts. Messaging services like Signal and Telegram, for instance, saw a massive increase in downloads earlier this year, after the purge of far-right content from mainstream social networks and in response to growing concerns around WhatsApp’s privacy practices. But despite the ways that this technology has been used, strong pro-privacy design is nonetheless something we prioritise at The Engine Room as a way of protecting vulnerable communities and frontline activists. 

Second, activist groups will usually have complex internal needs and dynamic external visions for which a specific technology may not exist. For example, a group fighting for disability justice may seek a platform that is accessible to users of various abilities, but that also meets the most rigorous security standards and does not share personal data with third-party apps. That tool may not yet exist, but the group must still choose something on which to do their work. Dominant technology options can be practically unavoidable, either because of their specific function and reach (as is the case with social media platforms like Twitter and Facebook), or because their extensive resources have enabled the development of more accessible, robust and user-friendly features (as with Google, Slack or Zoom). 

Finally, specific interpretations of what should or should not be considered justice-oriented tech will always reflect the realities in which the technology is deployed. For instance, choosing consentful tech solutions may be a priority for groups working to protect sex workers or survivors of sexual abuse, whereas for racial justice groups, knowing that Black people, Indigenous people or people of colour were involved in the design process may weigh more heavily. And while many of these factors will overlap, decisions around a particular technology should always be grounded in an understanding of context, lived experience and specific expertise.

Making justice-driven decisions around technology is challenging. This is just a taste of some of the questions we consider, the complexities we navigate and the dreams we dream. In upcoming pieces in this series, we will share more about our own (ever-evolving) selection process, as well as the mechanisms we use to support our partners with their own technical choices. In the meantime, if you have any questions or thoughts, don’t hesitate to reach out at julia@theengineroom.org.

Image by Jr Korpa via Unsplash.
