For many organizations, a central goal of monitoring and evaluation is to prove that programs are making a difference, that they have an impact. Knowing whether programs work is important in its own right, and hard evidence can also attract much-needed funding and improve an organization's reputation. The reality, though, is that not every organization can, or should, measure impact: sometimes it is not possible to recruit a sample large enough for a rigorous study, or there is simply nothing to randomize. Even when a program is structured so that impact measurement is feasible, evaluation must still be approached carefully. Impact measurement that aligns with the CART Principles must be well designed, well implemented, and appropriately timed. If the evaluation design or the fieldwork is sub-par, the results will be biased, meaning systematically wrong, which can lead organizations and policymakers to start or continue programs that have little or no impact, or to miss opportunities to expand effective ones.

In this section, we walk through the common biases that get in the way of measuring impact and explain what it means to conduct credible, actionable, responsible, and transportable impact evaluation.

Publication type: Goldilocks Toolkit
Developing Organization: IPA
Date: January 5, 2016
Version: February 2016
Terms of Use:

Copyright 2016 Innovations for Poverty Action. Impact Measurement with the CART Principles is made available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Users are free to copy and redistribute the material in any medium or format.