In 2009, the results from two microcredit impact studies in Hyderabad, India, and Manila, the Philippines, were released to mixed responses (Banerjee, Duflo, Glennerster, and Kinnan 2010; Karlan and Zinman 2011). Some media declared microfinance a failure (Bennett 2009). Many in the microfinance community dismissed these randomized studies as too limited to be a true reflection of the entire sector. These first randomized studies caused a sensation because they challenged the dominant impact narrative for microcredit—a narrative that rests on loans to capital-constrained microentrepreneurs who earn a steep return on marginal capital and thus can repay a relatively high interest rate and reinvest to grow out of poverty—and the way in which that narrative had been universalized in the popular imagination. In fact, the results were more nuanced. What the microcredit studies really showed is that this model of microcredit works for some populations—those who successfully grow businesses—but not for others. Many now agree that the expectations for microcredit in the popular discourse were overblown. For some, the pendulum swung to the opposite extreme: far from a panacea against poverty, microcredit, they argued, was actually doing harm. The evidence supports neither extreme view. Rather, the results of the studies aligned with and confirmed some of the evidence from nonrandomized methods already in the microfinance research literature that found impacts from credit that were modest but neither revolutionary nor deleterious. While the idea that capital can allow poor people to unleash small business opportunities remains valid for some poor clients, not every borrower is a microentrepreneur—take-up rates for credit products are often surprisingly low, and not all economic activities that poor people engage in yield high returns. Microcredit is not transforming informal markets and generating significantly higher incomes on average for enterprises.
And yet the industry has focused almost exclusively on the rhetoric of entrepreneurship and has overlooked the many important benefits to households that are using loans to accelerate consumption, absorb shocks, or make household investments, such as investments in durable goods, home improvements, or education for their children.
This paper uses a public economics framework to review evidence from randomized trials on domestic water access and quality in developing countries and to assess the case for subsidies. Water treatment can cost-effectively reduce reported diarrhea. However, many consumers have low willingness to pay for cleaner water; few households purchase household water treatment under retail models. Free point-of-collection water treatment systems designed to make water treatment convenient and salient can generate take-up of approximately 60% at a projected cost as low as $20 per year of life saved, comparable to vaccine costs. In contrast, the limited existing evidence suggests that many consumers value better access to water, but it does not yet demonstrate that better access improves health. The randomized impact evaluations reviewed have also generated methodological insights on a range of topics, including (a) the role of survey effects in health data collection, (b) methods to test for sunk-cost effects, (c) divergence in revealed preference and stated preference valuation measures, and (d) parameter estimation for structural policy simulations.
This paper shows how the productive interplay of theory and experimental work has furthered our understanding of credit markets in developing countries. Descriptive facts motivated a body of theory, which in turn motivated experiments designed to test it. Results from these experiments reveal both the successes and the limits of the theory, prompting new work to refine it. We argue that the literature on credit can be a template for research in other domains.
Until recently, rigorous impact evaluations have been rare in the area of finance and private sector development. One reason for this is the perception that many policies and projects in this area lend themselves less readily to formal evaluation. However, a vanguard of new impact evaluations on areas as diverse as fostering microenterprise growth, microfinance, rainfall insurance, and regulatory reform demonstrates that in many circumstances serious evaluation is possible. The purpose of this paper is to synthesize and distill the policy and implementation lessons emerging from these studies, use them to demonstrate the feasibility of impact evaluations across a broader array of topics, and thereby help prompt new impact evaluations for projects going forward.
We present new evidence on the randomization methods used in existing experiments, and new simulations comparing these methods. We find that many papers do not describe the randomization in detail, implying that better reporting is needed. Our simulations suggest that in samples of 300 or more, the different methods perform similarly. However, for very persistent outcome variables, and in smaller samples, pair-wise matching and stratification perform best and appear to dominate the rerandomization methods commonly used in practice. The simulations also point to specific recommendations for which variables to balance on, and for which controls to include in the ex post analysis.
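The contrast drawn above between pure randomization and covariate-based assignment can be illustrated with a minimal simulation sketch. This is not the paper's actual simulation code; the function names (`simple_assign`, `stratified_assign`) and the Gaussian baseline covariate are illustrative assumptions. The sketch compares baseline imbalance under simple randomization versus pairwise matching on a baseline variable, in a small sample where the abstract suggests the choice of method matters most.

```python
# Illustrative sketch (assumed setup, not the paper's code): baseline-covariate
# balance under simple randomization vs. pairwise matched randomization.
import random
import statistics

random.seed(0)

def simple_assign(n):
    """Randomly split n units into equal-sized treatment and control groups."""
    units = list(range(n))
    random.shuffle(units)
    treated = set(units[: n // 2])
    return [1 if i in treated else 0 for i in range(n)]

def stratified_assign(baseline):
    """Sort units by the baseline covariate, then randomize within consecutive
    pairs -- the pairwise matching the abstract finds performs best in small
    samples."""
    order = sorted(range(len(baseline)), key=lambda i: baseline[i])
    assign = [0] * len(baseline)
    for k in range(0, len(order) - 1, 2):
        a, b = order[k], order[k + 1]
        assign[random.choice([a, b])] = 1
    return assign

def imbalance(baseline, assign):
    """Absolute difference in baseline means between treatment and control."""
    t = [x for x, a in zip(baseline, assign) if a == 1]
    c = [x for x, a in zip(baseline, assign) if a == 0]
    return abs(statistics.mean(t) - statistics.mean(c))

n = 50       # a small sample, below the ~300 threshold noted in the abstract
sims = 500
baseline = [random.gauss(0, 1) for _ in range(n)]

simple_gap = statistics.mean(imbalance(baseline, simple_assign(n)) for _ in range(sims))
paired_gap = statistics.mean(imbalance(baseline, stratified_assign(baseline)) for _ in range(sims))
print(f"simple: {simple_gap:.3f}  paired: {paired_gap:.3f}")
```

Under this setup, the pairwise-matched design yields substantially smaller average baseline imbalance than simple randomization, consistent with the small-sample pattern the abstract reports.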
Experimental economists believe (and enforce the idea) that researchers should not employ deception in the design of experiments. This rule exists in order to protect a public good: the ability of other researchers to conduct experiments and to have participants trust their instructions to be an accurate representation of the game being played. Yet other social sciences, particularly psychology, do not maintain such a rule. We examine whether such a public goods problem exists by purposefully deceiving some participants in one study, informing them of this fact, and then examining whether the deceived participants behave differently in a subsequent study. We find significant differences in the selection of individuals who return to play after being deceived as well as (to a lesser extent) the behavior in the subsequent games, thus providing qualified support for the proscription of deception. We discuss policy implications for the maintenance of separate participant pools.