Media Coverage
The New Yorker
May 17, 2010

Within economics, Duflo and her colleagues are sometimes referred to as the randomistas. They have borrowed, from medicine, what Duflo calls a “very robust and very simple tool”: they subject social-policy ideas to randomized controlled trials, as one would use in testing a drug. This approach filters out statistical noise; it connects cause and effect. The policy question might be: Does microfinance work? Or: Can you incentivize teachers to turn up to class? Or: When trying to prevent very poor people from contracting malaria, is it more effective to give them protective bed nets, or to sell the nets at a low price, on the presumption that people are more likely to use something that they’ve paid for? (A colleague of Duflo’s did this study, in Kenya.) As in medicine, a J-PAL trial, at its simplest, will randomly divide a population into two groups, and administer a “treatment”—a textbook, access to a microfinance loan—to one group but not to the other. Because of the randomness, both groups, if large enough, will have the same complexion: the same mixture of old and young, happy and sad, and every other possible source of experimental confusion. If, at the end of the study, one group turns out to have changed—become wealthier, say—then you can be certain that the change is a result of the treatment. A researcher needs to ask the right question in the right way, and this is not easy, but then the trial takes over and a number drops into view. There are other statistical ways to connect cause and effect, but none so transparent, in Duflo’s view, or so adept at upsetting expectations. Randomization “takes the guesswork, the wizardry, the technical prowess, the intuition, out of finding out whether something makes a difference,” she told me. And so: in the Kenya trial, the best price for bed nets was free.
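The trial logic described above can be sketched with simulated data. Everything in the sketch below is invented for illustration (the villager risks, the group assignment, the size of the effect); it is not drawn from the Kenya study, only meant to show why random assignment lets a simple difference in group means reveal a treatment effect:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

def run_trial(n=10_000, true_effect=-0.15):
    """Simulate a simplified bed-net trial with a hypothetical effect size."""
    # Each person has some baseline malaria risk, which varies for all the
    # reasons a real population varies.
    baseline = [random.uniform(0.2, 0.6) for _ in range(n)]
    # Randomize: each person flips a fair coin into treatment or control,
    # so both groups end up with the same mix of high- and low-risk people.
    treated = [random.random() < 0.5 for _ in range(n)]
    # Outcome: risk shifted down by the treatment, for the treated only.
    outcome = [b + (true_effect if t else 0.0)
               for b, t in zip(baseline, treated)]
    # Because assignment was random, the difference in group means
    # estimates the treatment effect with no further modeling.
    treat_mean = statistics.mean(o for o, t in zip(outcome, treated) if t)
    ctrl_mean = statistics.mean(o for o, t in zip(outcome, treated) if not t)
    return treat_mean - ctrl_mean

estimate = run_trial()
print(f"estimated effect: {estimate:.3f}")
```

With ten thousand simulated villagers, the estimate lands close to the true effect of −0.15; with a small sample, the two groups would no longer balance out and the estimate would wobble, which is why the passage stresses that the groups must be large enough.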