The Pros and Cons of Understandable Research

Editor’s note: Scott Guggenheim is a Social Policy Adviser at AusAID and participated in the Post-Conflict Recovery panel at the Impact and Policy Conference held in Bangkok.
 
I came to the ADB, IPA and JPAL Impact and Policy Conference as an anthropological tourist rather than as a native speaker of economics. It was a terrific workshop, full of interesting and often counter-intuitive original research. Of the sessions that I attended, two sets of papers stood out as particularly relevant for my own work. Rema Hanna’s paper on the unexpected lack of elite capture in community-based targeting for social safety nets in Indonesia shows that a red-flag risk we all take for granted virtually as a matter of course turned out to be surprisingly insignificant. Fotini Christia and Andrew Beath’s paper on organizing development and representational services around community decision-making in Afghanistan provided more fodder for the community side of the Great Participation Debate. As I’ll discuss a bit more below, this controversy urgently needs less noise and more balance, given the potential benefits that a community approach can offer stretched governments, so it’s good to see the rise in the number of rigorous studies that put operational hypotheses to the test. It’s even better to see a rise in the number of rigorously validated, positive results.
 
The other paper that blew my socks off was Gharad Bryan’s piece on migration in Bangladesh, which showed that investing $6 to help poor people migrate yielded $100 in returns. Now, I have to confess that ever since my earliest days doing farm surveys in the northern Andes, where nobody under 30 wanted to stay in farming, I’ve been interested in migration as an anti-poverty strategy, a useful alternative to traditional programs for agriculture and rural development. Still, in most parts of Asia where I work, I can’t think of any investment these days that could yield a return like that to poor, landless rural households, and yet remarkably few rural development strategies have been built around making migration safer, cheaper, and less uncertain. Of course we know why: most rural development agencies want to see people stay, not go; migration labour market failures are usually about poor governance that is not susceptible to technical solutions; and development agencies are themselves not set up to deal with non-spatial, international programming. But the paper implicitly makes a powerful case that we practitioners need to embrace these problems rather than simply avoid them.
 
Undoubtedly my biggest surprise of the whole event was that I could actually understand most of what was being said. Technical contributions aside, from an informed outsider’s perspective, surely one of the great achievements of Abhijit Banerjee, Esther Duflo, Ben Olken, Rema Hanna and other modern economists working on public policy has been to show that even economists can speak in normal language and not get laughed at by their peers. It’s a great trend.
 
Being understandable has its downsides, however. As I said in the session where I responded to researchers’ and practitioners’ presentations, one distressing realization is that, for all the technical sophistication in measuring results that was on glorious display in the room, a fair amount of unchecked error and distortion is also creeping into the field. Much of it is of the “not quite right” variety rather than out-and-out wrong. I could see this in the several studies where I had firsthand knowledge of what the researchers actually found versus what was being reported, particularly when findings were relayed secondhand.
 
None of these distortions is fatal, nor is any of them a deliberate misreading. On the other hand, this was not the first time I had heard these particular misinterpretations, and I can only assume that similar problems are arising with other studies. Unless more people step back to assess what we really do know, and with how much confidence we know it, this leakage of misinterpretations into the “common sense” general literature will increasingly hinder the way rigorously gathered evidence accumulates into knowledge about these important poverty issues.
 
Another point that struck me in the sessions was how much this new generation of evaluators believes in carrying out hands-on fieldwork. Of course this reflects my own background in anthropology. But I really do think that extended immersion and hands-on dialogue with reflective subjects are part of an emergent epistemology that treats an understanding of actors’ norms, intentions, and strategies as a critical part of what needs to be explained.
 
That said, it wouldn’t hurt the researchers to be a bit more rigorous in how they present their qualitative work. There’s a lot of research on the many ways bias can creep in through language, class, and so on; similarly, as any economist knows, the variance around a mean can be as significant as the mean itself, particularly when we are talking about winners and losers from different types of development actions.
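
To make that variance point concrete, here is a minimal sketch in Python. Every number in it is a hypothetical assumption invented for illustration, not a figure from any study presented at the conference; it simply contrasts two interventions with the same mean effect but very different distributional consequences.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000  # simulated households

# Two hypothetical interventions with the same average effect (+10 income
# units) but very different spreads. All figures are illustrative only.
broad_gain = rng.normal(loc=10, scale=2, size=n)       # nearly everyone gains a little
winners_losers = rng.normal(loc=10, scale=40, size=n)  # big winners, big losers

for name, effects in (("broad gain", broad_gain),
                      ("winners/losers", winners_losers)):
    share_worse_off = (effects < 0).mean()  # fraction left worse off than before
    print(f"{name:>14}: mean = {effects.mean():5.1f}, "
          f"sd = {effects.std():5.1f}, "
          f"worse off = {share_worse_off:.0%}")
```

Both interventions report the same average gain, yet the second leaves roughly four in ten simulated households worse off, exactly the winners-and-losers pattern that a headline mean conceals.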
 
I was also struck that academics’ incentives -- for their written-up drafts to survive peer review and to get the results to press before the competition does -- inherently lead to very static analysis. In the sessions that I attended, longitudinal research consisted of repeating the study a year later, and a really long-term study would test results for three whole years. But for a policy maker dealing with national programs, scaling up an intervention based on a one-time study is very risky. In real-life poverty projects, people learn to game systems quickly. Positive results often benefit from the “aha” factor that can initially help them get past political economy obstacles. In fact, we used to joke about the “poverty project lifespan theorem,” which said that it would never take more than five years for local elites to figure out how to capture any pro-poor program. Given the cost of scaling up even the simplest interventions, it would be nice if we could be more confident that good results from experimental research are likely to be sustained over time.
 
By the end of the workshop it was impossible not to think about how much better development projects could be if those of us who work on them had the time and access to draw easily and consistently on this sort of cutting-edge research. And yet while it’s fun to dream about moving development preparation upstream so that we can all become knowledge-based actors, it’s good discipline at the same time to think about what you’d be willing to give up in order to command all of this literature. Would theoretically stronger designs let us spend less on project supervision? Maybe – but I doubt it. Could we simplify project processing so that I could attend more conferences like this one and fewer procurement reviews? Perhaps – but simplification exercises often seem to make processing more rather than less complicated. It’s wonderful to see the global discourse on results generating such renewed interest in knowledge and evaluation, but now it’s time to get down to the nuts-and-bolts work of making conferences like this one bring substantive changes to the daily life of the development practitioner.
#impactpolicyconf
October 22, 2012