We provide evidence from field experiments with three different banks that reminder messages increase commitment attainment for clients who recently opened commitment savings accounts. Messages that mention both savings goals and financial incentives are particularly effective, while other content variations such as gain versus loss framing do not have significantly different effects. Nor do we find evidence that receiving additional late reminders has an additive effect. These empirical results do not map neatly into existing models, so we provide a simple model where limited attention to exceptional expenses can generate under-saving that is in turn mitigated by reminders.
Type:
Published Paper
Date:
January 19, 2016
This paper tests whether uncertainty about future rainfall affects farmers’ decision-making through cognitive load. Behavioral theories predict that rainfall risk could impose a psychological tax on farmers, leading to material consequences at all times and across all states of nature, even within decisions unrelated to consumption smoothing, and even when negative rainfall shocks do not materialize down the line. Using a novel technology to run lab experiments in the field, we combine recent rainfall shocks and survey experiments to test the effects of rainfall risk on farmers’ cognition, and find that it decreases farmers’ attention, memory and impulse control, and increases their susceptibility to a variety of behavioral biases. In theory, insurance could mitigate those effects by alleviating the material consequences of rainfall risk. To test this hypothesis, we randomly assign offers of an index insurance product, and find that it does not affect farmers’ cognitive load. These res...
Type:
Working Paper
Date:
January 19, 2016
Download
Administrative data refers to data collected for the administration of programs. It should be systematically collected, stored and used for program operation and management decisions. While administrative data is designed to track a program’s implementation—primarily the project’s activities and expenses—it can also include indicators on program outcomes. Examples of administrative data include educational records, client information from financial institutions, and hospital records of patient visits and health outcomes. Other examples include information held by government agencies, such as tax filings and Medicare claims. The CART principle of responsibility tells us that organizations should find the right balance between collecting enough data necessary to obtain credible, actionable information about a program, and the costs of doing so. Administrative data, due to its low cost and accuracy, can be an important part of a data collection strategy, useful for both monitoring and eva...
Type:
Goldilocks Toolkit
Date:
January 10, 2016
Download
The struggle to find the right fit in monitoring and evaluation (M&E) resembles the predicament Goldilocks faces in the fable “Goldilocks and the Three Bears.” In the fable, a young girl named Goldilocks finds herself lost in the forest and takes refuge in an empty house. Inside, Goldilocks finds a large number of options: comfy chairs, bowls of porridge, and beds. She tries each, but finds that most do not suit her: the porridge is too hot or too cold, the beds too hard or too soft – she struggles to find options that are “just right.” Like Goldilocks, organizations have to navigate many choices and challenges to build data collection systems that suit their needs and capabilities. How do you develop data systems that fit “just right”? Over the last decade and a half, nonprofits and social enterprises have faced increasing pressure to prove that their programs are making a positive impact on the world. This focus on impact is positive: learning whether we are making a difference enhanc...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Acumen: Defining Impact in Impact Investment
Acumen raises charitable donations to invest in enterprises that help solve some of the world’s toughest social problems. As a non-profit impact investor, the organization invests with ‘patient capital,’ meaning it invests in seemingly risky markets that may require working over a longer time horizon to develop viable businesses producing goods and services that benefit the poor. Through these investments, Acumen aims to maximize social return while also turning a profit, which in turn supports the sustainability of the enterprises in the long run. Among supporters of impact investment, this is known as a “third way” for international development assistance, occupying a space in between traditional philanthropy and for-profit private enterprise. Acumen has been a leader in pushing for a more concrete definition of social impact in impact investing. The organization prioritizes two things in its own approach to monitoring and evaluating its inves...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Women for Women International: Monitoring and Evaluation in Conflict and Post-Conflict Settings
Women survivors of war and conflict are disproportionately affected by acts of violence, displacement, poverty, and loss of property and relatives. Conflict disrupts familial and community networks, compelling women to assume greater responsibility for generating household income and supporting their dependents and community. Women for Women International (WfWI) works in countries affected by conflict and war and addresses these issues by supporting women to earn and save money, improve health and well-being, influence decisions in their home and community, and connect to networks for support. This case study examines WfWI’s collection and use of data in conflict and post-conflict settings to monitor and measure the results of their work. Despite the challenging setting, WfWI has developed a data collection system that produces high quality data and is in the process of making important chan...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Theory of Change: Laying the Foundation for Right-Fit Data Collection
The first step in designing a right-fit data collection strategy is to create a solid theory of change. A theory of change is a clear visual map that represents how a program will make an impact on the world. It illustrates what goes into a program, what gets done, and how the world is expected to change as a result. A theory of change supports right-fit data collection in several ways: by pointing organizations to the elements of the program they need to track to ensure it is operating as planned; by providing a foundation for impact measurement by differentiating the outputs to be tracked from the outcomes to be measured using a credible counterfactual; and by generating credible research questions. The Goldilocks Initiative does not offer a complete manual on building a theory of change—many resources exist for that—but here we break down the basics of creating a theory of change and explain how a clear theory, to...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
A mobile phone in every person’s pocket will soon be a reality. What does this mean for development organizations? This report reviews the current state of mobile technology for survey and telephonic data collection, activity monitoring, and impact measurement. It also addresses the notion of crowdsourcing, and the various ways it is used to improve organizational decision-making.
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Theory of change is the foundation of a right-fit monitoring and evaluation strategy. A theory of change provides a road map that outlines how a program will work to change outcomes and deliver impact. It identifies the key assumptions of the program and risks to successful implementation, and helps organizations pinpoint the data they need to collect. When done well, a theory of change should also enjoy widespread buy-in from staff throughout the organization. This helps to focus data collection activities on the most important questions about implementation and impact. Building a theory of change with solid theoretical foundations and widespread buy-in requires organizations to invest time and resources into a process with multiple steps and participation at all levels of the program. Chapter 3 of The Goldilocks Problem outlines the key elements that make up a theory of change and walks through several examples. In this article we outline some of the preparatory work needed to guide...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
One key message of the Goldilocks Initiative is that impact evaluation is not for everyone. Yet, even when measuring impact is not feasible, social enterprises and non-profits can still answer important questions about their programs using rigorous measurement techniques. One of these techniques is rapid-fire testing: randomized trials that compare the effect of related interventions on a single, immediate (or short-term) outcome. This method is used to test operational issues and aims to influence immediate outcomes, such as product take-up, program enrollment, loan repayment, and attendance, among others. In rapid-fire tests, participants are randomized into different treatment groups (and sometimes, but not necessarily, a pure control group) and exposed to variations in a program’s design or message. The outcome of interest (usually program take-up or use) is measured and compared across treatment and control groups. Often outcomes are measured administratively, so that there is no...
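The mechanics of a rapid-fire test are simple enough to sketch in a few lines of code. The Python snippet below is purely illustrative (the arm names, sample size, and take-up probabilities are hypothetical): it randomly assigns a roster of clients to message variants, simulates an outcome, and compares take-up rates across arms. In a real test, take-up would come from administrative records rather than a simulation.

```python
import random
from statistics import mean

# Illustrative rapid-fire test: randomize clients into message variants
# and compare take-up across arms. All names and numbers are hypothetical.
random.seed(0)

ARMS = ["control", "gain_frame", "loss_frame"]

# Randomly assign each client to an arm.
clients = [{"id": i, "arm": random.choice(ARMS)} for i in range(3000)]

# Simulate take-up with arm-specific probabilities (placeholder values);
# in practice this outcome would be pulled from administrative data.
TAKEUP_PROB = {"control": 0.10, "gain_frame": 0.13, "loss_frame": 0.12}
for client in clients:
    client["took_up"] = random.random() < TAKEUP_PROB[client["arm"]]

# Compare take-up rates across arms.
for arm in ARMS:
    outcomes = [c["took_up"] for c in clients if c["arm"] == arm]
    print(f"{arm}: n={len(outcomes)}, take-up rate={mean(outcomes):.3f}")
```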
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Satellites are mobile, remotely controlled communications systems that orbit the planet, capturing imagery and other data for transmission back to Earth. While satellites can provide relatively high-resolution imagery of the entire globe, historically they have been operated by government agencies and a small number of companies. The instruments themselves traditionally cost between $200 million and $500 million, and leverage billions of dollars of public sector investment in research, development, and maintenance. Access to imagery has thus been available to a limited set of organizations, including government space agencies, research institutions, and corporations with the analytic capacity to use satellite data for business intelligence and decision-making. In recent years, there has been a rapid trend towards small private organizations sending their own satellites into orbit. Because these are much smaller in size, have shorter life cycles, and much lower upfront costs (as little...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Organizations face many challenges in measuring their impact using rigorous evaluation methodologies such as randomized controlled trials (RCTs). While technical requirements—such as sufficient sample size—are often an issue, for most organizations the main challenge is overcoming internal and external resistance to impact evaluation. Some objections are well placed. For example, most operational learning does not require impact evaluation. And for some organizations, a focus on measuring impact may not be appropriate, particularly when it takes resources away from other strategic priorities and program monitoring. External pressures from donors or other stakeholders can be the driving force behind impact evaluations, with both positive and negative consequences. When there is little regard for accuracy or quality, organizations waste resources that could have helped to improve their programs.
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
Sensing technologies are ubiquitous in most developed markets, where they are used for industrial process monitoring, product tracking, and information services. More recently, non-governmental organizations (NGOs) have begun leveraging sensors for supply chain management, remote monitoring, and consumer product testing. This report describes how sensors work, and how they can be harnessed for data collection in low-resource settings.
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
At the Goldilocks Initiative, we argue that organizations should be doing two things: monitoring what they do and evaluating the impact of what they do. And we’ve argued that the impact part of the equation is often prioritized over the monitoring part. As a result, we are often evaluating the impact of programs before we know whether they are well implemented. Consider what happens if we boil down the recipe for organizational impact to a simple formula:

A x B = Impact

In this formula, “A” means doing what you said you would do and doing it efficiently, and “B” means choosing good ideas that actually work. If only life were that simple. Although not everything sorts quite so cleanly, the terms monitoring and evaluation roughly align with this formula. Think of monitoring as “A” and evaluation as “B.” Much academic work focuses on “B,” evaluating the social impact of programs, particularly the work of development economists running randomized evaluations. But organizations should nev...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
For many organizations, a central goal of monitoring and evaluation is to prove that programs are making a difference—that they have an impact. Not only is it important to know if programs work, but providing hard proof can attract much-needed funding and may also improve an organization’s reputation. The reality, though, is that not everyone can and should measure impact: sometimes it’s not possible to muster a sample size large enough to conduct a good study, or there simply isn’t anything to randomize. When programs are structured such that impact measurement is possible, it’s still important to approach evaluation carefully. Impact measurement that aligns with the CART Principles must be well designed, well implemented, and appropriately timed. If the evaluation design or fieldwork is sub-par, results will be biased (meaning wrong), which can lead organizations and policymakers to start or continue programs that have little or no impact, or to miss opportunities t...
Type:
Goldilocks Toolkit
Date:
January 05, 2016
Download
IPA assembled this set of resources for use in designing and running an impact evaluation. Beginning with the need for a theory-driven evaluation, and ending with a set of concrete tools to use in running an evaluation, they cover a range of practical materials useful for organizations that are considering a rigorous impact evaluation. This set of resources covers the following topics: theories of change and impact evaluations; determining the right timing for an evaluation; selecting the right evaluation methodology; designing and managing a randomized evaluation; calculating sample size and doing power calculations; mobile data collection; administrative data; and data management and analysis. For organizations with strong internal technical capacity, these resources will provide valuable practical guidance for designing and implementing a credible impact evaluation. And for organizations without the internal capacity to design and run an impact evaluation, these resources will help key staff mem...
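To give a flavor of the sample size and power calculation material these resources point to, here is a minimal Python sketch using the standard approximation for comparing two proportions; the baseline take-up rate, detectable effect, significance level, and power target are hypothetical placeholders rather than recommendations.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p0, p1, alpha=0.05, power=0.80):
    """Approximate clients needed per arm to detect a change in a
    proportion from p0 to p1 with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # value for the desired power
    variance = p0 * (1 - p0) + p1 * (1 - p1)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p0) ** 2)

# Hypothetical example: detect an increase in take-up from 10% to 13%
# with 80% power at the 5% significance level.
print(sample_size_per_arm(0.10, 0.13))  # about 1,772 clients per arm
```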
Type:
Goldilocks Toolkit
Date:
January 04, 2016
Download
IPA has compiled this set of resources on data collection and storage to aid in the development of monitoring and evaluation plans. When used in conjunction with a solid theory of change, these resources can help to ensure the collection of credible, actionable, and transportable data. The topics covered include: designing survey instruments; generating credible survey questions; power calculations; using administrative data for evaluations; using mobile or paper-based surveys; and data and code management.
Type:
Goldilocks Toolkit
Date:
January 04, 2016
Download
IPA assembled this list of resources for use in designing a program and developing a theory of change. The following resources can be useful throughout the design phase of your program—showing you how others have defined similar problems and in what contexts, which interventions others have tried, and how large the effects of these programs have been found to be. These resources can also be useful for thinking ahead to a future evaluation. Many of them describe: the research methods, such as the evaluation type, sample size, and sampling strategy; the research questions; the timing of the evaluation; and the outcomes measured. There is no one-size-fits-all evaluation plan, but reviewing what others have done and how they have overcome specific challenges can help you think through options for your own evaluation strategy. Though far from exhaustive, the resources here are a good place to start when designing your program or theory of change....
Type:
Goldilocks Toolkit
Date:
January 04, 2016
Download
IPA assembled this list of resources for use in designing a monitoring system and understanding monitoring as good management. This set of textbook chapters, articles, blog posts, and organizational guides provides a range of information and perspectives on monitoring and management. They draw from a variety of sectors, from health to education to nonprofit and business management. These resources cover: designing a process-evaluation (monitoring) plan; methods for monitoring service utilization and program organization; detailed steps in building a monitoring system; monitoring as management; building organizational culture around monitoring; and strengthening the use of data in program monitoring. Taken together, these resources will aid in the development of a monitoring plan and system that reflects CART principles and incorporates useful management practices. If thoughtfully applied, they should help develop an organizational culture built around monitoring and data, one that will help to i...
Type:
Goldilocks Toolkit
Date:
January 04, 2016
Download
IPA has compiled this set of resources on the theory of change and its relationship to program design. They take readers from needs assessments and the importance of the theory of change, through practical discussions of how to develop a theory of change and identify assumptions. These resources cover the following topics: importance of a theory of change; needs assessment; developing a theory of change; identifying assumptions; using a theory of change in program design and monitoring; and theory of change for donors and grantees. A solid theory of change is the foundation of strong program design and a sound monitoring and evaluation strategy. These resources go into the depth necessary to build a theory of change and link it to a CART monitoring strategy.
Type:
Goldilocks Toolkit
Date:
January 04, 2016
