Frequently Asked Questions
IPA uses randomized evaluations to measure impact because they provide the most credible and reliable way to learn what works and what does not. Randomized evaluations use the same methods frequently used in high-quality medical research: they rely on the random assignment of a program or policy and measure its impact by comparing those who received the program with those who did not.
What is a randomized evaluation?
Randomized control trials, randomized controlled trials, randomized evaluations, rigorous evaluations, impact assessments: these are all slightly complicated ways of saying something really quite simple. If we want to know how effective a program is, we need a comparison group. Without one, we cannot really say anything about what would have happened without the program. And the only way to create a fair comparison group is through random assignment.
How do randomized evaluations work?
In the simplest kind of study, the group we are looking at is divided randomly in two. One group receives the benefits of a program or intervention, and the other does not. We are basically flipping a coin for each person to decide whether they receive a program or not.
With random assignment, each person in the study has the same chance as any other of being assigned to either group. As long as the two groups are large enough, this ensures that they will be statistically identical on average. Any difference we then observe between the two groups can be attributed to the program or intervention itself, rather than to other external or unobserved factors.
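The coin-flip logic above can be illustrated with a short simulation. This is a minimal sketch using made-up numbers (not real study data): it randomly assigns a hypothetical population to treatment and control, checks that a baseline characteristic balances across the two groups, and then recovers a known, simulated program effect from the simple difference in mean outcomes.

```python
import random
import statistics

random.seed(42)

# Hypothetical study population: each person has a baseline income.
# (Illustrative numbers only.)
people = [{"baseline_income": random.gauss(100, 15)} for _ in range(10_000)]

# Random assignment: flip a fair coin for each person.
for person in people:
    person["treated"] = random.random() < 0.5

treated = [p for p in people if p["treated"]]
control = [p for p in people if not p["treated"]]

# With large enough groups, baseline characteristics balance on average,
# so the two groups are comparable before the program starts.
balance_gap = (statistics.mean(p["baseline_income"] for p in treated)
               - statistics.mean(p["baseline_income"] for p in control))

# Simulate outcomes: the program adds a (hypothetical) effect of +5,
# on top of baseline income and some unrelated noise.
TRUE_EFFECT = 5.0
for p in people:
    bump = TRUE_EFFECT if p["treated"] else 0.0
    p["outcome"] = p["baseline_income"] + bump + random.gauss(0, 5)

# Because assignment was random, the difference in mean outcomes
# estimates the program's impact.
estimated_effect = (statistics.mean(p["outcome"] for p in treated)
                    - statistics.mean(p["outcome"] for p in control))

print(f"baseline gap between groups: {balance_gap:.2f}")   # near zero
print(f"estimated program effect:    {estimated_effect:.2f}")  # near 5
```

The key point the sketch shows is that nothing about the individuals decides who gets the program; only the coin flip does, which is what makes the comparison fair.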
What is a replication?
In addition to conducting randomized evaluations, IPA seeks to replicate both the program and the evaluation in several contexts, making evaluation findings directly applicable to policy goals and to the scaling up of effective strategies.
What do IPA partner organizations gain from their involvement in IPA projects?
IPA partners gain important insight into which of their interventions work and why, and also have the opportunity to adopt lessons learned from other innovative and proven programs. Additionally, IPA partners gain valuable experience conducting rigorous evaluations and learn how to use rigorous monitoring and evaluation methodologies; our aim is to empower local organizations to use these results beyond the term of any one evaluation. Our partner organization, the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT, offers executive education two to three times per year to help guide organizations in successfully conducting randomized evaluations.
What does IPA do other than impact assessments?
IPA strives to transform its findings and insights into innovative action. We disseminate the evidence we generate to development practitioners and, where appropriate, work closely with partners to facilitate the replication of effective programs in other areas of the world. For example, we are working with the Government of Ghana and the Indian NGO Pratham to take a successful education program from India to Ghana. In many cases we also work with organizations to design the innovations that are then evaluated, such as a commitment savings product that encourages savings among bank clients in the Philippines.
What is IPA’s relationship to the Jameel Poverty Action Lab (J-PAL) at MIT?
IPA is a close partner of the Abdul Latif Jameel Poverty Action Lab (J-PAL). The two organizations share a common mission and take similar methodological approaches to development policy evaluation. Both organizations have pioneered the use of randomized controlled trials to study the effectiveness of development interventions worldwide and have collaborated extensively on field studies involving randomized evaluations. Many J-PAL Affiliates are also IPA Research Affiliates or IPA Research Network Members. Innovations for Poverty Action and J-PAL attempt to bridge the gap between research and the policy world by creating and disseminating knowledge about what works to policymakers and practitioners around the world.
Is IPA affiliated with a particular university?
IPA is a US-based 501(c)(3) nonprofit organization and does not have an institutional relationship with a particular university. Our research network includes professors at several leading universities, including Yale, MIT, Harvard, UC Berkeley, Columbia, NYU, Chicago, Princeton, Dartmouth, LSE, Sciences Po, University of Ghana, Oxford University, University of the Philippines, and elsewhere.
Who funds IPA research projects?
IPA’s expertise is provided as a service to its partners. Funding typically comes either from an organization’s evaluation budget or from donors (large and small) who are particularly interested in learning the impact of their dollars invested. On occasion, IPA will secure direct funding for research and evaluation and then search for partners to evaluate specific innovations or programs. IPA also often works with its partners to identify potential funding sources and submit joint proposals. Studies have been funded either directly or indirectly by a variety of foundations in the academic, development, and policy research communities, including the Bill and Melinda Gates Foundation, the National Science Foundation, the World Bank, USAID-BASIS, the Asian Development Bank, the Consultative Group to Assist the Poor (CGAP), DFID, 3ie, the Inter-American Development Bank, and the Ford Foundation.
Can organizations afford randomized controlled trials?
Randomized evaluations cost less than many people expect, relative to non-randomized evaluations. Evaluations can, of course, be expensive, but they should be thought of as an investment in learning which programs work and how to make real improvements. In the short run, randomized evaluations can cost less than some quasi-experimental evaluations because they allow for smaller sample sizes. In the long run, experimental evaluations are less risky, and hence less costly, because they provide more reliable information for improving operations.
How long do IPA studies take?
Studies can take as little as a few months or as long as several years to complete, depending on the length of the intervention and the outcomes of interest. For instance, in the case of microfinance, we may want to learn which approaches are more or less effective at generating new clients; such studies can be completed quickly, since the outcome of interest is simply signing up for the service. Studies that measure impact clearly take longer, since one needs to wait a reasonable period for the impact to occur.
How can I find out more about IPA's work?
If you want to know more, please write to us at firstname.lastname@example.org.