To Make Progress Fighting Poverty, We Have to Take Methods Seriously

[Image: IPA enumerators. Behind every number in a paper, there can be hundreds, or thousands, of individual conversations.]


Two months ago, I left a job where I had spent 20 years designing and conducting education evaluations at Mathematica Policy Research. Things are going well in the impact evaluation world: the quantity of studies has been growing rapidly in many sectors, including education. But the production frontier for generating new evidence is still too narrow. We spend too much time and effort creating evidence without generating enough scientific consensus and without seeing enough of the policy change that motivates us to do this work. I came to IPA to help leverage the scale of the hundreds of past and ongoing rigorous studies that IPA and its partners conduct in many sectors all over the world, and to generate a new kind of research: research aimed at improving how we collect and use data to address persistent social problems like poverty. In my new role at IPA, I am leading a research methods initiative that I describe in more detail below.

IPA has already raised money to bring a variety of researchers into this effort. But why do we need to fund efforts to improve analysis, measurement, and fieldwork? We have loads of things to research—do we really need "research on research"?

We need a research and methods push now because the technology and tools for doing social science are outstripping our ability to use them wisely. The democratization of surveys is making survey research both better and worse.

Survey technology is getting cheaper and more sophisticated. Anyone can now open a free Survey Monkey or Google Forms account and begin pelting people with poorly worded, badly conceived questions just as easily as they can follow good survey practice. Web servers are getting less expensive and more modular, making computationally intensive techniques like Bayesian analysis and machine learning available to all—including those who might fall into various ethical traps, like baking past prejudice into predictive algorithms.

Meanwhile, more and more researchers are conducting randomized trials, but most of us follow the same pattern of testing one hypothesis at a time using classical frequentist methods. When the goal is to reject a null hypothesis at the arbitrary 0.05 threshold that academic journals conventionally require for publication, we succumb to p-hacking and publication bias. Investigators are writing their own surveys, each reinventing the wheel, without being able to draw on a library of tested, validated instruments. More and more people are running field experiments without realizing how important it is to do back-checks and data quality audits, or without knowing the best practices in surveyor recruitment, screening, hiring, and compensation.
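To make the p-hacking arithmetic concrete: if a trial checks 20 independent outcomes and no true effects exist, the chance that at least one crosses p < 0.05 is 1 - 0.95^20, or about 64 percent. Here is a minimal simulation of that point (a toy illustration, not code from any IPA study):

```python
# Hypothetical illustration: false positives from testing many outcomes,
# one at a time, against the classical p < 0.05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_outcomes, n_obs = 1000, 20, 100

studies_with_spurious_finding = 0
for _ in range(n_studies):
    # Treatment and control come from the same distribution,
    # so every null hypothesis is true by construction.
    treatment = rng.normal(size=(n_outcomes, n_obs))
    control = rng.normal(size=(n_outcomes, n_obs))
    pvals = [stats.ttest_ind(t, c).pvalue for t, c in zip(treatment, control)]
    if min(pvals) < 0.05:
        studies_with_spurious_finding += 1

# Prints roughly 0.64: nearly two-thirds of the simulated studies find a
# "significant" effect even though no outcome has any real effect.
print(studies_with_spurious_finding / n_studies)
```

This is exactly the failure mode that pre-analysis plans and multiple-testing corrections (Bonferroni, Benjamini-Hochberg, and their relatives) are designed to guard against.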

We need the next generation of survey techniques, observational methods, open science, and data publication standards to solve problems like these and get us more quickly and efficiently to the evidence that will improve policy choices and policy outcomes. We need to develop, validate, and forge consensus on how to measure hard-to-measure concepts like intimate partner violence, “financial health,” or socio-emotional learning. We need to learn from research in other disciplines and capitalize on the many hundreds of field studies already completed and ongoing to better understand how enumerators can influence measurement: to see enumerators not simply as tablet survey operators, but to think more deeply about what happens between an enumerator and a respondent when very personal questions are being asked. We need to use this knowledge to develop new cost-effective ways to help ensure enumerators are getting the most accurate answers. We need technology solutions to make good scientific practices like pre-analysis plan registration and careful data publication routine and high quality.
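As one small, hypothetical example of what validation work involves (a sketch, not IPA's protocol): a routine first check for a multi-item scale, say a socio-emotional learning index, is internal consistency, often summarized by Cronbach's alpha.

```python
# Hypothetical sketch: checking internal consistency of a multi-item scale.
# The simulated responses below stand in for real survey data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=500)                        # latent trait per respondent
items = trait[:, None] + rng.normal(size=(500, 5))  # five noisy survey items
print(round(cronbach_alpha(items), 2))              # about 0.8 here
```

Internal consistency is only a starting point, of course; the harder consensus-building is construct validity, showing that a scale tracks the real-world outcomes it is supposed to capture across languages and contexts.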

Since joining IPA as its first chief research and methods officer, I've been spending a lot of time assessing the quality of every aspect of our research and helping smart people design the next generation of evaluation research methods. In collaboration with researchers at Northwestern's Global Poverty Research Lab and with J-PAL and our vast networks of partners, IPA is embarking on an ambitious Research Methods Initiative (read more about it here) with three strands:

1. Improving the methods we use to design and analyze the results of social experiments
2. Designing and validating smart survey techniques to measure elusive concepts like financial health, intimate partner violence, and socio-emotional learning
3. Running experiments to test innovations in fieldwork implementation

IPA and its partners have hundreds of completed RCTs and hundreds more going on now in 22 countries. These represent opportunities to learn lessons across studies, to fold mini-experiments into the ongoing research, and to curate and catalog data. This is a heavy lift and will require cooperation across the boundaries of organizations, countries, and disciplines, but the result will be a new generation of research and stronger faith in what carefully collected and analyzed data can tell us about the way forward.

Where does the broader research community fit into this effort? Researchers currently working with IPA can apply for competitive small grants to build methodological experiments into their ongoing research. These can be rigorous tests of the effects of alternative enumerator payment incentives, respondent incentive experiments, or tests of innovative survey design methods. We completed an initial round of submissions but are accepting applications on a rolling basis through December 31, 2019. We’re also hiring a PhD researcher to lead studies and work on the research methods initiative, as well as a poverty measurement director. (I’m also looking for a data scientist and regional technical coordinators in Kenya and Uganda; please apply or share the job announcements with your network.) If you are a donor or policymaker, please reach out to me directly with ideas, suggestions, or gripes, whether here, on Twitter, or by email. In the coming months, we’ll be rolling out other aspects of this initiative and will post updates here on this blog.

June 25, 2019