Impact and Scale-Up: An Interview with Annie Duflo and Dean Karlan, Part 2

Last month, Annie Duflo assumed the executive directorship of IPA, replacing Dean Karlan, who will stay on as President. Annie has been with IPA for over three years, serving as Vice President and Director of Research. With this shift in leadership, we wanted to sit down with Annie and Dean to hear directly from them about the new organizational structure and what lies ahead as IPA looks toward its tenth anniversary. This excerpt from the larger interview focuses on the origins of IPA and the organization’s scale-up efforts. We posted Annie and Dean’s comments on their goals for the next year in an earlier post.

-----

Interviewer: The origin story of IPA is that Dean joined like-minded researchers with a vision of applying rigorous research methods to really figure out what works in development. As you close in on the 10-year mark, how is IPA today different from what was originally imagined?

Dean: We are doing less scale-up [of proven programs] than was originally intended—I originally envisioned more cases of IPA getting actively involved in direct implementation and scale-up right after a study ends.

And, on a related point, we are doing less work guiding organizations and firms as they build their own internal research departments. I had this image at the beginning that we were going to guide a bunch of banks and larger organizations into creating in-house research practices, where they could take the randomization techniques that we show them and do other tests that are not academically interesting but would be hugely useful to them for program or product design.

The example I remember talking about nine years ago was the Balsakhi remedial education program [which brings teaching assistants to work in small groups with poor-performing students]. The academic question was, “Does Balsakhi work?” We know now that it does. But there are follow-on questions, like, “What’s the optimal ratio of students to teaching assistants—should it be four to one, eight to one, twelve to one?” That’s not a question you are going to get many academics excited about spending a year and a half and a couple hundred thousand dollars figuring out. But if you're a large organization running remedial education, you really ought to know the answer. And it's not so hard to set up a test to figure that out.

So we're doing less of that type of research, things that are distinctly non-academic but that are necessary for policy. We should do more, but naturally everything has its tradeoffs, and our resources have not been focused on those types of questions.

Interviewer: So let’s return to the subject of scale-up: Why is IPA doing less scale-up than you originally intended?

Dean: It takes a lot to get to a point where you really do have a clear policy picture. Research moves us, it moves our priors, pushes us towards certain ideas over others, or certain ways of rolling out programs versus others, but it doesn’t always lead to a packaged “do this” answer.

There is also this underlying question of how many times we test something before we feel confident we can go to a new context, evaluate whether the context is appropriate, and then scale it up. This is a question we always struggle with, as we don’t want to oversell any one result, but at the same time we don’t want to stop action from happening until total certainty is reached (nothing will ever be “certain,” naturally). So we are still learning how to make this tradeoff between gathering more evidence and jumping into the fray and working to see evidence change policy.

Last, there is the issue of figuring out IPA’s appropriate role relative to scale-up. What is our comparative advantage? When is a situation right for us to be doing something versus convincing others to take action?

Annie: In order for a proven program to be scaled, one needs to go beyond disseminating research results. Scaling a program requires very different skills from starting a new research project, and it requires a different level of funding. Whether we are working with a government or running a program in-house, scale-up programs require a huge amount of work before we even start providing services. It takes an upfront investment in resources, manpower, and the right kinds of skills.

Interviewer: Annie, you played a leadership role in bringing the Balsakhi remedial education program to Ghana, where IPA is now working with the Ghanaian government to scale it up as the Teacher Community Assistant Initiative (TCAI). What was it about remedial education in particular that showed promise for expanding it there?

Annie: TCAI adapts components of several programs that have been evaluated and that have been shown to be successful [including the Balsakhi remedial education approach tested in India, and the extra-teacher program tested in Kenya]. So, the underlying concepts of TCAI had been evaluated in different contexts and we were quite confident that the results could be generalized. Of course, the way it’s being done in Ghana is not exactly the same way it was done in India or in Kenya, since the program had to be adapted to the local context. But it was one of the programs where we felt we had enough evidence to go one step further.

Interviewer: IPA has a number of programs that you feel are sufficiently well-proven to bring to scale. In addition to TCAI, there is also school-based deworming and chlorine dispensers for safe water. Is it your opinion that these efforts represent the areas where IPA has had the greatest impact over the past ten years, or do you see things differently?

Dean: They represent the ones for which there's a very clear prescription to make, but the impact of IPA’s research goes well beyond these examples. 

A simple example of impact that may potentially be much bigger is on microcredit and the dozens of studies we have done in that space. The outcome of that work has been to say, look, there's nothing wrong with microcredit, but microfinance is more than just loans. Microfinance institutions need to be more client-focused, providing the right service at the right time, which may mean savings or insurance, and not just loans. And for the donor, if you're trying to maximize the benefit you generate with a $10 million donation, microcredit is probably not the place to go. For one, investors are happy to fund most microfinance institutions. So let them! Second, ideas beyond mere credit are proving potentially more effective.

Now, the thing with any evaluation that shows a negative result is that it's really hard to say who you're affecting with those results. Take One Laptop Per Child. There were studies conducted, not by IPA, that showed the One Laptop idea was not working (the studies measured outcomes after about a year, so hardcore believers may argue that longer-term results will show a change; but the process changes didn’t seem to appear after a year either, so this seems like a hard case to make). How do we measure the impact of that study? How can you count the dollars that are not donated to something?

Annie: The impact we are having here is in influencing the debates. As an example, the studies on bed nets showed that charging for bed nets reduced the number of people who used them. Those studies strongly influenced the debate on pricing health products. Being able to shift the debate is very important, and IPA research contributes to that.

December 31, 2011