April 14, 2020

This post is part one of a two-part series from our Right-Fit Evidence Unit on how funders and implementers should pivot their monitoring and evaluation systems during the COVID-19 crisis.


The COVID-19 crisis has changed the way the development sector operates. Many funders are working with their grantees as they pivot their usual development programming to accommodate the new reality—whether by transforming their work to a feasible remote model or incorporating specific COVID-19 response components like messaging around social distancing.

But when it comes to understanding impact, this unprecedented global crisis has put organizations in a double bind: getting the right monitoring data just got a lot harder, at the moment when it’s needed most. In response, we’ve seen funders take two different paths: either deprioritizing Monitoring, Evaluation and Learning (MEL) altogether, or asking grantees to try to make their original MEL plan and indicators fit with the new activities as best they can. In fact, both approaches put program impact at risk.

In this moment of rapid adaptation, deliberate learning is more important than ever. Funders must allow and encourage grantees to field test their adapted models, validate assumptions and carry out rapid refinements.

Developing a right-fit MEL system just got harder...

In times of crisis, investing resources in MEL systems is likely a second-order priority. Organizations are scrambling to find a way to keep helping their beneficiaries as the context in which they operate changes by the minute. Monitoring data is often far from their minds.

In addition, collecting data in a world of social distancing presents serious challenges. Many organizations have put fieldwork on hold, cutting off a crucial source of information about the needs of their beneficiaries and how their programs are working. Most would need help to switch to remote data collection, whether via phone surveys, IVR, SMS or other methods.

...yet MEL is more crucial than ever...

We’ve seen funders, seeking to provide flexibility and rapid emergency response, be tempted to do away with any expectations of MEL to dedicate all resources and staff bandwidth to programming. Yet most of these adjusted implementation models have never been tested before, and many implementers are expanding into areas of programming with which they have little prior experience. In such cases, it is more likely than not that the first programming versions they try simply won’t work.

Without the data to know that programs are underperforming, implementers would be unable to change course—at a time when the stakes have rarely been higher.

...and pre-existing MEL plans are unlikely to be right-fit.

Other funders have asked grantees to keep using pre-existing MEL plans and indicators, following them as best they can. Yet the assumptions that are immediately critical to verify in adapted programming are often radically different from those underlying the original MEL plan.

For instance, one of IPA’s partners in East Africa is preparing to distribute home learning guides to parents so they can engage their children in learning activities while school is suspended. The prior MEL planning focused on teachers and classrooms, but this new approach calls for us to test a totally different set of assumptions. Are parents actually receiving the learning guides? Can they read and understand them without guidance? Do they trust the source? Do they actually carry out the learning activities, and do they believe they are valuable? IPA’s Right-Fit Evidence Unit is helping this partner answer these questions so they can identify problems, test alternative models and magnify impact over time.

In a future blog post directed at implementers, we will expand on the kinds of data that implementers should focus on in the first stages of their adaptation to the crisis.

How funders can encourage learning and iteration

Now more than ever, funders should give grantees the space, time, and MEL resources to rapidly test their new models and figure out what works and what doesn’t. Encouraging grantees to test different delivery channels and models, for instance, or committing to taking stock of their monitoring data after two weeks of operations, sends a strong message about the need for evidence-based delivery even under time pressure.

Once grantees are implementing new programs, funders should expect grantees to collect lean monitoring data for tracking and continuous improvement. Collecting carefully chosen data about implementation quality and beneficiaries’ changes in knowledge and behavior can provide meaningful and rapid insight about program performance, going beyond simple output measures like the number of SMS messages sent.

Funders should still provide flexibility to grantees regarding the specific data they end up collecting and allow grantees to adjust their indicators as the crisis evolves. At the same time, they need to establish the expectation that learning and evidence-based course corrections are critical, and that grantees will need the right MEL data to make those corrections possible.

If you or an organization you fund could use assistance designing or collecting lean MEL data as described here, reach out to IPA’s Right-Fit Evidence advisory unit at rightfit@poverty-action.org.


This blog post is part of a series about IPA's Research for Effective COVID-19 Responses initiative, or RECOVR, which is supporting immediate response efforts and providing longer-term research evidence to decision-makers working to mitigate the impacts of the crisis in the Global South.