Posted 10 June, 2014 by Christopher Wilson

DIY for M&E

Monitoring and Evaluation, also known as M&E, is one of those things whose very mention can shut down the most vital parts of the brain. Eyes glaze over. Heart rate slows. Because it’s an esoteric field, dominated by highly paid and highly specialized experts, most program staff don’t know much about it, except that it’s complicated and important to donors, giving it that uncanny ability to simultaneously intimidate and impress.

For those of you wary of M&E’s untapped potential, we hear your concerns. The process may lead to more questions than answers, which means more time and energy. Still, there’s good reason for project staff working with tech and accountability initiatives to pull back the curtain and get their hands dirty with measuring and assessing their projects. If thoughtfully adapted to current contexts and objectives, less sophisticated M&E methods can easily outperform the most expensive of parachuting experts when it comes to learning and adaptation. And at the end of the day, learning and adaptation are the entire point, for donors as well.

The Guide

We’ve seen this with projects we support, and we know what a difference it can make to simply get an overview of the options. That’s why we recently produced a hands-on guide to developing a framework for tech and accountability projects to monitor their impact on the go. It presents the process of developing a framework in 17 distinct steps, each with a breadth of options and a discussion of costs and benefits. It doesn’t try to boil the technicalities of M&E down into platitudes, but it does provide a clear framework for engaging in measurement and making smart decisions about where and how to invest in learning.

The Wins

A do-it-yourself approach to project M&E can make measurement and learning more effective by saving resources, increasing impact, and surfacing new opportunities. This is simply a function of familiarity. When the people doing the work design the measurement systems (with input, as much as possible, from the people that work affects and serves), they’re more likely to account for the subtle factors of context and power that determine whether projects succeed or fail. This can save time and resources, and help projects more quickly home in on the activities that actually make a difference. For projects using technology, this also presents an opportunity to use the data already being generated by outreach, websites, SMS, or other platforms, and to learn how to watch that data for meaningful insights or challenges on the horizon. At bottom, developing your own measurement and learning framework is the best way to make sure that measurement is meaningful, rather than a box-checking exercise for donors.

The fantastic thing is that generally this is also what donors want. Taking ownership of a project’s M&E is an opportunity to take control of donor conversations about measurement and results. It’s a way to set meaningful targets and manage expectations.

The Costs

Of course, developing and implementing a measurement framework is not free or easy. Even when avoiding expensive surveys and expensive consultants, thinking carefully about what makes a project succeed takes time. It will be most productive if everybody on the project team contributes, and this means setting aside time and energy that could be used elsewhere. It might also involve funding additional project activities to collect data, paying licensing fees to access contextual data for baselines and comparisons, or buying hardware or software to easily track the data that is most meaningful. All this will vary from project to project, and there are always trade-offs to be made between what a project invests and what it learns. But regardless of project contexts and objectives, the most significant hurdle is often pulling back the curtain and understanding what measurement can and can’t do, and assessing the actual costs and benefits. Because M&E is esoteric and intimidating, that initial learning curve is often a deal-breaker.

With these hurdles in mind, we encourage you to take a look at a copy of the guide here. In short, a DIY approach to M&E helps us learn, makes us more adaptable, and cuts down on resource waste: this guide will show you where to start. If you try it, we’d love to hear your thoughts. We also provide direct support to advocacy initiatives using technology, so if you’d like help thinking through M&E for your project, get in touch and we can go from there.

New M&E Guide for Tech and Accountability Initiatives

Using technology for social change is hard.

Understanding how to measure that work’s impact, and whether technology is making a meaningful difference, can be even harder. Yet there’s broad agreement that an iterative and reflective approach increases the chance that tech for accountability initiatives will achieve meaningful impact.

Monitoring and understanding what works and what doesn’t helps projects adapt in dynamic program contexts, anticipate challenges, and strengthen evidence-based advocacy. Having a smart and efficient framework for monitoring can also help organizations conserve resources and take control of conversations with donors about results.

Unfortunately, the world of monitoring and measurement is complicated, and most project staff don’t have the time or resources to become experts. To help meet this need, the engine room and WeGov have teamed up to produce Measuring impact on-the-go: a users’ guide for monitoring tech and accountability programming.

This guide presents a series of steps that tech and accountability initiatives can use to develop monitoring frameworks. It will work as a step-by-step roadmap or as an à la carte menu. It is not comprehensive, but it provides an introduction to a number of relevant methods, tools and strategies, with links and recommendations for further exploration.
