The Obama administration – almost as geeked-out a bunch as we are – is on board with rigorous evaluation of social policy. Peter Orszag, the director of the Office of Management and Budget, recently wrote that,
Rigorous ways to evaluate whether programs are working exist. But too often such evaluations don’t happen. […] This has to change […] Wherever possible, we should design new initiatives to build rigorous data about what works and then act on evidence that emerges — expanding the approaches that work best, fine-tuning the ones that get mixed results, and shutting down those that are failing.
Not everyone agrees, however. In a recent Education Week commentary, Lisbeth B. Schorr argues that this push could do more harm than good:

[D]epending on what the administration considers “strong evidence,” these efforts risk sabotaging or marginalizing some of the most innovative attempts to solve intractable social problems. […] I worry that, in defining what constitutes “the best available evidence” of effectiveness, the OMB and federal agencies will follow the constricted approach of […] insisting that public and philanthropic support go only to programs shown to be evidence-based through experimental evaluation methods, preferably involving random assignment of participants to experimental and control groups.
(You can find the full text of her article in the August 26th issue of Education Week here, but it’s by subscription only. Sorry!)
Based on our read, Ms. Schorr raises two main objections. First, she argues that experimental evaluation methods can’t be used to evaluate “the most promising strategies [which] are likely to be complex and highly dependent on their social, physical, and policy context.” Second, she asserts that proponents of evidence-based policy treat experimental methods as the one and only approach to social policy evaluation.
If these two statements were both true, it would be cause for concern indeed. At IPA, however, we take issue with them both. Our response, printed in the September 23rd issue of Education Week as a letter to the editor, is reproduced below.
To the Editor:
At Innovations for Poverty Action, we disagree with Lisbeth B. Schorr’s assessment that rigorous evaluation methods will inhibit innovation in social policy. In fact, randomized trials offer the best chance to generate lessons on what works.
Ms. Schorr’s arguments in “Innovative Reforms Require Innovative Scorekeeping” (Aug. 26, 2009) reflect some common misconceptions about randomized controlled trials. Contrary to her assertions, few would argue that these should be applied universally. Rather, advocates support their strategic use to provide evidence on whether and how to scale up programs with the potential to improve the lives of many. Read India, for example, a program developed through randomized evaluations, has had a positive impact on 21 million children through remedial literacy tutoring.
Far from inhibiting innovation, randomized controlled trials allow policymakers to test new ideas before making massive spending decisions. They can also demonstrate that particular ideas get results, without relying on fads or rhetoric.
Nor is the application of randomized controlled trials as restricted as Ms. Schorr implies. Researchers today are able to test complex packages of interventions and dynamic processes. One of our studies, in Ghana, will measure the effectiveness of an “epicenter strategy,” a community-determined set of development programs. Randomized trials are well suited to this type of evaluation, and are particularly useful for identifying the impacts of particular combinations of complementary interventions.
Many of Ms. Schorr’s arguments about the need for innovation in fact support the use of well-conducted randomized controlled trials, which are grounded in an in-depth understanding of the program being evaluated. The replication of evaluations in multiple contexts is equally critical to helping local communities adapt successful programs to their own circumstances.
We support the increased use of evidence-based policies. Time and again, when people need accurate results, they turn to randomized trials.
Innovations for Poverty Action
New Haven, Conn.