For the last couple of years, the engine room has been exploring how to measure whether technology for transparency and accountability (tech-for-T&A) projects actually have an impact. It’s an area filled with lengthy evaluation reports and endless discussions on methodologies, but few solid conclusions. Dip even a cautious toe into these murky waters and you’ll find a researcher complaining that the evidence available is ‘weak’ or ‘uneven, piecemeal and scattered’.
(Credit: Yorlmar Campos and James Fenton)
So, why persevere? As it turns out, initiatives that are set up to capture the messy reality of what goes on during a project are often in a better position to build on their gains and learn from setbacks – and impress donors at the same time. The engine room recently published a hands-on guide for programme staff on this very topic, setting out 17 concrete steps for monitoring projects on the go.
There has already been a series of attempts to identify specific circumstances where T&A projects have an impact (see here, here, here and here for some longer reads). All of them call for a bigger, more solid evidence base, which could allow organisations to keep building the long-term pressure for transparency and accountability that’s essential to many of these projects. Now, the main push is to design projects specifically so that they produce useful evidence, both about the project itself and about T&A projects more generally. A whole range of people are testing out how to improve the way we do things, but it can be tricky to work out who’s doing what. This post rounds up the main initiatives and identifies a few trends.
(Credit: Original Photo SJKen, modified by Maya Richman)
Measuring things better also involves thinking carefully about what ‘impact’ means in reality. For example, projects might not record developments in a way that reflects smaller, intermediate changes that could add up to a deeper shift over time. Programme staff don’t always think through or articulate their assumptions about outcomes when drawing up a theory of change – if, indeed, they write one at all. There’s also debate about how much impact we can reasonably expect – especially from projects in complex political environments – as well as calls to better understand a particular political context before deciding what ‘success’ there might look like.
We’ve noticed a few more trends in the projects detailed below:
- Academics are working closely with the people who implement projects, designing interventions so that the evidence they create contributes to our general understanding of T&A initiatives.
- Projects are increasingly trying to use methods that take account of the complex environment in which a project operates, with some mixing quantitative and qualitative methods and others using real-time assessments and other approaches that allow them to factor in changing political dynamics.
Overall, we’re excited to see this issue being addressed from so many perspectives. Here’s our round-up:
Making All Voices Count (MAVC)
MAVC’s Learning and Evidence component, an applied and practical fund for research and learning on tech-for-T&A, focuses on answering the question:
‘Is closing the feedback loop between citizens and their government a catalytic force that enables better governance, enhances service delivery and strengthens democracy?’
The Institute of Development Studies (IDS), which is managing the component, is building an evidence base on technology and open government and has recommended a ‘solid meta-level review…as soon as sufficient evidence is available’. Four briefing papers were published in July, including a ‘selective, purposive’ review of recent work in this area.
What methods is it using?
The Learning and Evidence component will distribute research grants on specific themes (ranging from small, bounded case studies to broader multi-level approaches), and integrate them with the projects supported by MAVC. In this way, projects will be specifically designed so that they create new, useful evidence – while being informed by new ideas generated by the Learning and Evidence component.
(Credit: MAVC)
The component is reviewing existing evidence with input from a wide range of actors, holding a series of meetings, e-dialogues and discussions with groups involved in transparency and accountability while surveying academic publications and ‘grey’ organisational literature. The information will be accessible through an online Knowledge Repository (which is also inviting contributions from individuals and organisations outside the initiative).
Transparency and Accountability Initiative (T/AI)
T/AI is focused on four questions in relation to tech-for-T&A:
- How and when can technological innovations be transferred to other contexts?
- How is information used and consumed; how and why do citizens engage; and how can new technologies for T&A lead to empowerment, accountability and development outcomes?
- How and when can new technologies perpetuate power inequalities?
- How can we improve the diagnosis of context and theories of action?
(Credit: T/AI)
What methods is it using?
Members of TALEARN, a community of practice on governance, facilitated by T/AI, have long been discussing impact and evaluation. Rather than focusing on specific methods itself, TALEARN is collecting and systematising existing evidence about impact. It’s doing this in three main ways:
- Building an online knowledge repository to consolidate and link evidence on impact: TALEARN’s Accountability and Participation (TAP) Nexus Practice Group, together with the U4 Anti-Corruption Resource Centre, is currently building an online Knowledge Repository that will hold practical information from projects as well as more formally structured information. It also aims to identify evidence gaps and methods for filling them.
- Holding meetings for a wide range of people interested in measuring impact: TALEARN regularly holds meetings in which practitioners, donors and researchers can exchange ideas. The most recent of its annual meetings, in Jakarta in March 2014, looked at impact in some depth.
- Publishing think pieces on new developments in the field: for example, one recent piece assesses calls for transparency projects to ‘work politically’.
Transparency for Development (T4D) Project
This five-year, $8.1 million multi-country research project is being jointly undertaken by the Results for Development Institute (R4D) and the Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation. It’s examining community transparency and accountability initiatives and their effect on the health and social sectors. But it’s even more ambitious than that: it’s trying to produce evidence that builds a broader understanding of the impact of T&A initiatives at large. It has specifically selected countries and types of interventions to help provide evidence that can be used to make generalised statements about where T&A projects tend to work best.
(Credit: R4D)
Researchers from R4D and the Ash Center will work with local civil society groups simultaneously in several countries to co-design a transparency and accountability intervention for health care, and then assess the results. The study is currently evaluating interventions in 200 communities in Indonesia and Tanzania as part of a pilot phase, and plans to expand to other countries in the coming years.
What methods is it using?
The project will use mixed methods, combining qualitative field research with a randomised controlled trial – the first time this has been attempted simultaneously in more than one country. The main goal is to provide evidence that both people working on the ground and academics can use to improve health, accountability and citizen participation. The project has two phases:
Phase I: a mixed methods approach in Tanzania and Indonesia using randomised controlled trials (RCTs) to evaluate effects on health care quality and outcomes and on community power relations, combined with qualitative studies – direct observation, focus groups, informant interviews and ethnographic methods – to assess the role of local context and produce more data on which mechanisms lead to impact.
Phase II: researchers will assess whether the intervention can be scaled and generalised beyond Indonesia and Tanzania. They’ll also look at which T&A interventions are appropriate for different sets of social, political and economic circumstances. If results show that there is potential to apply the findings more broadly, a similar intervention will be tested in other countries. If the Phase I interventions have little impact, Phase II will design new interventions that take this into account.
Here’s a project outline with more detail on all of this. (The project is jointly funded by the William and Flora Hewlett Foundation, DFID and the Bill and Melinda Gates Foundation – and is brokered under T/AI.)
Open Government Partnership (OGP)
Technology plays a key role in the OGP, a multilateral initiative in which governments sign a declaration committing them to promote transparency, fight corruption and harness new technologies, followed by an action plan developed through public consultation.
Hillary Clinton at the OGP summit in July 2011 (public domain)
It’s often difficult to use OGP data to gauge the impact of tech-for-T&A because each country creates its own action plan, meaning that commitments aren’t always directly comparable. Commitments about Open Data and e-government make up almost one-third of all those included in the action plans, although these are not always explicitly linked to the broader reform efforts needed.
What methods is it using?
Impact in the OGP is assessed by an independent body called the Independent Reporting Mechanism (IRM), which monitors countries’ progress against their action plans and against the OGP’s stated values. IRM reviewers give a qualitative judgement on whether each element of a country’s action plan has had social impact, as well as on the extent to which it’s been completed. They can also award a ‘star’ rating that singles out a government commitment they consider to have significant social impact, to be complete (or substantially complete) and to be relevant to OGP values (across the 35 most recent countries to complete OGP action plans, 24.7% of commitments were starred).
The OGP has created a comparative database of the qualitative and quantitative data collected since it started in 2011. The information gathered has yet to be used in contexts beyond the OGP, though some researchers have seen potential to do so.
Standards/indicators on open government and transparency
There’s a whole range of efforts to assess and rank government openness by measuring it according to sets of indicators – in fact, far too many to list here. These indicators typically include a mixture of objective measures (laws, signatures of treaties and so on) and subjective measures like responses to questionnaires or expert assessments. UNDP have a useful guide that explains the main uses and limitations of different indicators.
This report by Sheila Coronel provides an excellent sample of the key indices focusing on right-to-information. As she explains, lack of evidence isn’t the problem this time round:
‘The problem is not that transparency has not been measured enough: It has. But what we have today is a patchwork of ratings and indices evaluating various aspects of government openness. These measures cover different sets of countries, examine different spheres of government transparency, and use a variety of criteria and methodologies.’
There are good practical reasons for this: for a start, it would be prohibitively expensive for any single index to cover over 100 countries, and not all countries have enough local researchers to provide information for the full range of indicators out there. Still, the patchwork makes it hard to assess the field as a whole, and harder still to make general statements about what it contributes to our understanding of impact in T&A. Interestingly, there’s debate about whether and to what extent transparency indices and ratings actually lead to reforms themselves – another area where further research is needed.
One initiative particularly worth mentioning here is Open Government Standards, which attempts to define what ‘open government’ means from civil society’s point of view. The initiative, coordinated by Access Info Europe, identifies specific measures that an open government policy should contain – distinguishing them from concepts such as good government or e-government. It began in 2012, following criticism that the first OGP Action Plans, published that year, were inconsistent with one another. For more detailed explanations of the standards, see Transparency, Participation and Accountability.
International Budget Partnership
In the area of budget advocacy, the International Budget Partnership (IBP) has done a lot of work on understanding and assessing impact, as part of its ‘long-term commitment to learning about how, when and under what circumstances CSOs’ budget advocacy has impact’. As well as producing a Super-Duper Impact Planning Guide to help organisations design full, considered impact plans, it’s undertaken a series of four-year real-time evaluations of its projects from 2009 to 2013. The resulting case studies (see here for full-length versions for Mexico, Brazil, South Africa and Tanzania) provide a different perspective on the fluid, often unpredictable nature of T&A projects and a more nuanced way of viewing impact.
What methods does it use?
The IBP appointed a team of external researchers to monitor advocacy campaigns in each of the four countries over a three-year period, throughout each campaign’s lifespan. The studies typically drew on primary sources, such as interviews and government documents, alongside secondary sources like published reports and articles.
For example, the team writing the case study for the Tanzania project (assessing the impact of the NGO HakiElimu’s advocacy on the education sector) used a longitudinal method that attempted to assess not only what had happened during the three-year period but why and how it had happened. They also used contribution analysis, a method for determining to what extent observed results are due to programme activities rather than other factors.
(Credit: HakiElimu)
To do so, the case study team interviewed and held focus groups with HakiElimu staff members and others linked to the education sector in Tanzania, to get their take on what changes had occurred over the project period. The team then asked interviewees to explain those changes, considering the impact of civil society and HakiElimu alongside other possible explanations. Drawing on a review of secondary evidence, the team attempted to test or substantiate both the observations of change and the explanations for why they occurred.
It’s going to be a long haul before we have a reliable evidence base for what works and what doesn’t, and we’re looking forward to seeing how the projects and activists doing front-line advocacy can both contribute to and benefit from that effort. This post is an attempt to get an overview of the field, and is therefore unlikely to be completely comprehensive. Have we missed anything? Let us know.