
There is a long tradition in development economics of collecting original data to test specific hypotheses. Over the last 10 years, this tradition has merged with an expertise in setting up randomized field experiments, resulting in an increasingly large number of studies in which an original experiment has been set up to test economic theories and hypotheses. This paper extracts some substantive and methodological lessons from such studies in three domains: incentives, social learning, and time-inconsistent preferences. The paper argues that we need both to continue testing existing theories and to start thinking about how the theories may be adapted to make sense of the field experiment results, many of which are starting to challenge them. This new framework could then guide a new round of experiments.

Type: Working Paper
Date: January 01, 2006

This paper discusses the role that impact evaluations should play in scaling up. Credible impact evaluations are needed to ensure that the most effective programs are scaled up at the national or international levels. Scaling up is possible only if a case can be made that programs that have been successful on a small scale would work in other contexts. Therefore the very objective of scaling up implies that learning from experience is possible.

Type: Report
Date: January 01, 2004

This paper reviews recent randomized evaluations of educational programs in developing countries, including programs to increase school participation, to provide educational inputs, and to reform education. It then extracts some lessons for education policy and for the practice and political economy of randomized evaluations.

Type: Working Paper
Date: May 01, 2003
