Tech Bias, People Bias

Zara Rahman

Technology’s problems are not, and never have been, just about technology. In our work at The Engine Room, we see how problems arise not ‘just’ from technical mistakes but from very human decision-making, whether that is a human trusting a machine over another human, poorly executed data analysis, or, often, technology being built to reinforce human prejudices. When we frame technology’s problems as technical or operational problems, I fear we’re not doing enough to acknowledge the intentionally harmful aspects of the human decisions behind them.

Biases, biases, everywhere 

The biases embedded within technical systems are often invisible to the untrained eye, and often surprising to those outside the ever-growing bubble of people concerned with ethics and technology. Perhaps unsurprisingly, we have a tendency to place an undue amount of faith in technology to be ‘objective’ or ‘fair’, even when a growing body of research and real-life examples shows us that this is very much not the case.

Though we’re increasingly focusing on the biases within technology, we shouldn’t turn away from the human decisions that put them there. 

For every experience I’ve had of a technology not working correctly because of who I am, or who those around me are – biometric photos not recognised because of the colour of my skin, voice recognition technology not working for my parents because of their accents – I have ten times as many that arose from prejudiced people treating us differently because of who we are. Those human biases won’t go away with technical fixes, and it’s those human biases that both create the broader environment in which we all live and shape the environment in which technologies are created in the first place. 

When we talk about fairness or equity in machine learning, are we also talking about fairness or equity as lived by the people who exist in that space? When we dedicate time, attention and funding to the creation of processes that keep algorithms accountable, are we doing the same work when building processes to hold people accountable for their harmful actions? (Case in point among many: Epstein. Ito. Appelbaum. I’ll stop, but you get my drift.) 

Maybe it’s because of the limits of our imaginations. It’s somehow easier to focus on ‘fixing’ technology and data because we built it. It’s logical to think that if we built it, we can, theoretically, fix it. But addressing deeply held beliefs and human prejudices is so much harder. 

From tech-solutionism to tech-pessimism

Another reason for the focus on technology accountability rather than technology-and-human accountability is part of a larger backlash against tech solutionism. While many people determinedly remain at the ‘tech-solutionist’ party, we’re seeing a shift towards widespread critique and tech pessimism – evidenced by upsettingly commonplace headlines like “WhatsApp Destroyed a Village”. (For what it’s worth, that article itself explains how WhatsApp didn’t destroy that village; a mob of villagers did, by beating five strangers to death.)

The common thread between the two schools of thought is assigning an undue amount of agency to technology, instead of to people. 

What WhatsApp did enable in that situation was to magnify existing fears, spread mistrust, and exacerbate prejudices. It increased the speed and scale of the problem. The solution to the acceleration of social issues by way of communications platforms like WhatsApp is never going to be found in a technology tool. Instead, it lies in the social fabric of the village – in the accountability and enforcement processes, and in the way in which we trust and see others.

It’s undeniable that technology can scale up existing problems, yet I can’t help but think about the far-reaching consequences of people’s prejudices, too. UK politicians’ choice to create a ‘hostile environment’ for immigrants was drawn from their xenophobia, and it manifested itself through a combination of policies and technologies. We can resist building those technologies and systems, and we can push to change the policies. But ultimately we need to challenge that institutionalised xenophobia, or at least create a system in which people who hold those beliefs do not have the power to destroy so many lives with their prejudice.

We need to address those prejudices. It’s a long-term project, and one that starts with introspection. How do our own behaviours strengthen – or destroy – the systemic inequities around us? How do we display – or question – our own prejudices, not just in the tech we build, but in the way we treat those around us?

Until we address our own and each other’s prejudices and ethics – not just in how they play out in technology, but in our communities, too – any effort to unite “tech and ethics” will fall short. There’s no such thing as “unbiased” technology, but there could be processes and systems in place for holding ourselves and each other accountable for our behaviour, which would be a solid first step towards ensuring that negative impacts are mitigated and, ultimately, reduced.

Image credit: Fernand De Canne
