No, We Don’t Need Methodological Review Boards

Should researchers have the freedom to perform research that is a waste of time? That was the question Daniël Lakens recently posed in a thought-provoking Nature commentary. To paraphrase his response to his own question: no, they shouldn’t—and to make sure researchers do not waste time and resources, what we actually need is a new type of review board. In his argument, Lakens cites his own experience as chair of his university’s ethical review board, where he saw many studies that would pass muster on purely ethical grounds but were still not worth doing because they were not well designed.

Social media chimed in with the predictable reaction of, “Oh no, not another review board! The last thing we need is more hurdles.”

As director of research methods at Innovations for Poverty Action (IPA) and the institutional official for the IPA Institutional Review Board (IRB), I understand where Lakens is coming from. But in the end, I side with his critics in cautioning against a traditional Research Review Committee (RRC). At IPA, we tried that approach for several years but recently scrapped it for a more flexible process that ensures high quality and functions more as “support from a wise colleague” than as gatekeeping from an all-knowing board.

It’s true there are flawed studies that survive IRB review because the board and its staff cannot feasibly screen them all thoroughly for design quality. And there are studies, even those designed by well-established senior academics, that have blind spots, poorly specified or poorly thought-out aspects, unstated or unrealistic assumptions, too many hypotheses, or explanations that are simply unclear. In such cases, questions from someone outside the study team can lead to improvements, including greater clarity in the design.

Clearly, methodological review boards present a complex mix of pros and cons. IPA’s experience with such boards offers some cautionary lessons about what can go wrong, and it explains why we now take a different approach.

[Blog illustration] IPA’s Research Review Committee functioned for years almost like an academic journal, relying on blinding and volunteer reviewers.

For years, IPA had a formal RRC, patterned after those at many other institutions, particularly biomedical research units. IPA’s RRC functioned almost like an academic journal, relying on blinding (i.e., reviewer anonymity) and volunteer reviewers. It was designed as a quality control mechanism that would free IPA to offer partnership opportunities to a larger, more diverse group of principal investigators (PIs). But in practice, IPA’s RRC often functioned as a gatekeeper. In fact, it was too restrictive a gatekeeper (and there were too many studies to review), so over time we allowed researchers to seek exemptions based on criteria such as the PI’s experience with IPA or affiliation with J-PAL, a network of academics who have passed selective screening criteria.

As the number of exemptions grew, the purpose of IPA’s RRC became less clear and its process unraveled. Studies suddenly had more co-PIs, including some who happened to meet the exemption criteria. Underpowered studies would be passed off as “pilots” until it was time to publish. Failing to be exempted from RRC review perversely sent a message to researchers that we somehow didn’t trust them or thought we knew their area of study better than they did. Some PIs shared only the bare minimum of information to avoid scrutiny of the details. Reviews that weren’t exempt took a long time because volunteer reviewers had little incentive to prioritize them. And miscommunication between PIs and reviewers about highly technical or context-specific issues further hampered the RRC’s effectiveness.

IPA's Approach to Methodological Review Boards

So how do we solve the problem Lakens raises? We do not need a research methods police for every single study, but research institutions should provide research methods review support and ensure it is delivered in a way that is fast, efficient, and works for all research. This support should be a central part of the institution’s research infrastructure, just like grants management or communications support: a form of friendly but rigorous internal quality control. Research projects themselves should pay for it (as they do with IRB review), but they should want it and see it as a valued service. The institutions that bear the cost and reputational risk of studies failing to produce usable evidence should require this kind of helpful review, and funders and IRBs should insist on it. Conducted early in the life of a study, such a review makes it possible to intervene before time or money (or both) are wasted. The more resources or reputations at risk, the greater the scrutiny should be.

Over the past two years, IPA has rolled out a flexible, collaborative technical research review process that has these properties. We learned that, by implementing the following steps, it is possible to address Lakens’ concerns without creating undue burdens on researchers or reviewers.

Step 1: Do away with blinding. Strongly encourage both written feedback and a synchronous meeting allowing for two-way communication between reviewers and researchers. In IPA’s experience, these debriefing sessions have been extremely valuable for both reviewers and PIs, reducing confusion and miscommunication in just a single one-hour conversation.

Step 2: Establish a two-step scheduling system. The first step is to get the review on the calendar even before the design is ready to be reviewed. This makes it possible to assign a reviewer who will be available when the design is ready and can turn it around the same day or the next. Scheduling the review and assigning the reviewer this early in the process reduces delays and lets the reviewer plan around the competing demands (e.g., writing proposals and submitting course grades) so common to the profession.

Step 3: Pay the reviewers. Some have proposed employing full-time reviewers. But the best reviewers are often researchers themselves and might be reluctant to abandon their research to become full-time reviewers. At IPA, our internal researchers take turns serving as reviewers. Their work as reviewers is charged to the grant that funds the project, or to whatever account is supporting the proposal development. For IPA’s researchers, spending time as a reviewer has proven intellectually stimulating and rewarding.

Step 4: Establish common ground. It’s important that researchers and reviewers understand they are pursuing a common goal: high-quality, rigorous research. Ideally, the reviewer should be selected based on their expertise, but they do not have to be an expert on every aspect of the study. They can still provide a fresh perspective, which is often the review’s most valuable contribution. Reviewers represent the institution and help protect its reputation and mitigate operational risk. In some cases, they have to deliver the tough news that a study should not move forward, but that is rare.

At IPA, we are still ramping up this process and implementing it throughout the organization, but initial results are promising. Lakens noted that “The goal is not to gatekeep, but to improve.” This is exactly the spirit we try to follow with IPA’s technical research review process. Based on our experience, the more flexible and collaborative approach we’ve taken achieves that goal better than methodological review boards have in the past.