UPDATE: We had a great chat last week. Not surprisingly, we started talking about Umati, but the conversation quickly spilled over into a number of important related issues. This is great, and it’s exciting to see how much interest and focus is already turning towards responsible data.
There are a lot of conversations moving now, and we expect this will lead to a closer investigation of how academic resources (derived potentially from IRB and certification processes) can be used to support activists and programmers, and more generally of how to make ethical research questions relevant to a wider community of advocates and development workers. It looks like a Technology Salon will also be organized on “Ethics in Development,” which will address some of these issues, and Internews is also looking at doing a mapping of what resources are already out there.
It’s great to see all this activity, and we can’t wait to see it converge and coalesce. There are bound to be many more conversations, and we will be doubling back soon with some of our notes and thoughts on the Umati project, but in the meantime, you can find the YouTube video of the conversation below.
The Nairobi-based iHub has played a leading role in developing and thinking through the interface between technology and the crowd, and we are very excited about the opportunity to learn from their experiences on the Umati project. Umati is a media monitoring and reporting initiative that identified, collected and published instances of online hate speech surrounding Kenya’s 2013 elections.
Umati used five individual media monitors (representing each of the main ethnic groups in Kenya) to track and cite incidents of hate speech and dangerous speech on selected media platforms (Twitter, Facebook, blogs and comments on online newspapers). These incidents were then forwarded to the online mapping platform Uchaguzi, to help prevent further harm. In addition to preventing violence and understanding how online hate speech could lead to it, the project also aimed to pilot a new methodology (more on the method here).
The project had some interesting findings. For example, dangerous speech was most commonly associated with identifiable commentators, and most commonly on Facebook (compared to the other media platforms monitored). But given Kenya’s recent history and anxiety regarding election-related hate speech and ethnic violence, this project also raises a lot of important questions for us and our work on the Responsible Data Project:
Does publicizing hate speech pose risks to speakers, communities, or researchers? If so, how can those risks be identified, and what kinds of steps can be taken to minimize them?
How can we encourage responsible use of socially sensitive data, and ensure that its release doesn’t exacerbate tensions?
What role, if any, should informed consent play in media monitoring and peace promotion programs?
How important is verification for projects that are tracking subjective data and social dynamics?
These are hard questions that can perhaps only be answered in specific cases. We know that they represent important issues for the iHub Research team, and we look forward to talking about what they knew going in, what they learned, and what they would have changed if they were to do it all again.
If you are also interested in these issues, join us. The hangout is open to all and all questions are welcome.