A Call for Structured Ethics Appendices in Social Science Papers

Social science researchers engaged in primary data collection often consider a range of ethical issues when planning their research, but such considerations are rarely articulated upfront or in the subsequent articles generated from the research. We believe that building explicit steps for doing so can lead to better research, better communication about research, and thus greater impact from research. To facilitate this, we propose that authors include a structured ethics appendix in working papers and published online appendices, and we provide a framework below for guidance.

As many realize, institutional review boards (IRBs) aim to protect research participants but ultimately examine a narrow set of ethical issues. Many ethical issues that ought to be discussed do not fall under the purview of an IRB, and we do not actually have “ethics review boards” with a mandate over all ethical research issues. Issues not covered by an IRB are instead left to self-regulation and to peer review during grant applications and publication. (Read more about the role of IRBs, in particular the IPA IRB, here).

This is less than optimal for two broad reasons. First, there are important ethical issues in how researchers engage with, and ultimately influence, outside institutions, policymakers, and people. Active consideration and discussion of these matters can help form stronger norms and adherence to those norms. Second, social science writing is often fairly terse on methods, in particular on information relevant to understanding ethical issues. Many journals require an IRB approval number and nothing more. This leaves readers to fill in the unstated information with potentially incorrect assumptions. The short-form nature of some public discussions can then exacerbate the problem.

This is why we propose a “structured ethics appendix” for papers and projects that engage in primary data collection. This is not a checklist; we know of no specific recipe that can provide “ethical approval” of a research project. Instead, it is a structured but brief set of 14 questions that we hope will give researchers a concise and consolidated platform to spark thoughtful consideration of major ethical issues and to provide relevant information to readers.

A few clarifications, in response to questions that have come to us: (1) Although we would not be opposed, we do not imagine journals requiring this. We propose it as a voluntary step in research planning and as an appendix that researchers could put at the end of their working papers and include in any online appendices for published papers. (2) We will be writing an article to accompany this structured appendix that explains some of the larger motivation and issues, and our take on them. We envision that article helping to explain the issues behind each of the questions in the appendix, but we also include, in small italics, brief explanations of each question.

We are posting this here in hopes of getting feedback from the community. We have also prepared a few samples of what this could look like, from our own projects as well as others’. 

If you have any comments or ideas on how we can improve this, please email us at karlan@northwestern.edu and/or christopher.udry@northwestern.edu.

The framework is below. We also wrote four samples: Tying Odysseus to the Mast, A Better Vision for Development, Randomizing Religion, and Agricultural Decisions after Relaxing Credit and Risk Constraints.

We look forward to hearing what people think.

Dean & Chris

Ethics Structured Discussion Appendix

1. Equipoise

Is there equipoise?

Clinical equipoise means there is genuine and meaningful uncertainty or disagreement among stakeholders about the outcome of the research (e.g., the cost-effectiveness of an intervention relative to alternatives). Note that this could be cut and pasted from the main paper, specifically the section that describes the existing literature on the topic and where the paper adds value to that literature.

2. Role of Researchers with Respect to Implementation

Are the researchers “active,” i.e., did they have direct decision-making power over whether and how to implement the program?

If YES, what was the disclosure to participants and the informed consent process for participation in the program? Providing IRB approval details may be sufficient, as in #4 below, but any further important issues should be discussed here.

If NO, i.e., implementation was separate, explain the separation.

A researcher should be considered “active” if, for example, the implementing staff are employed by an institution at which the PI is employed and report, directly or indirectly, to the PI at this institution with regard to this project; or if the researchers control funding for the implementation and as such have direct decision-making power over key implementation decisions.

Some key factors to describe that help illuminate whether the researchers are “active” or not (here “researchers” are defined as the PIs and the staff that report directly or indirectly to the PIs): Did researchers directly provide any of the interventions, or parts thereof, to participants? Did researchers interact directly with participants and implicitly endorse one or more of the interventions?

3. Potential Harms to Research Participants from the Interventions or Policies 

Does the intervention, policy or product being studied pose potential harm to participants? Related, are participants particularly vulnerable? If yes to either, what is being done to mitigate such risks?

For this discussion, it may be important to consider whether the researchers are “active” (see above). If the researchers are “active,” then they are responsible for the potential harms, and thus a robust discussion is appropriate. If the researchers are not “active,” then while they may not be responsible for potential harms, a discussion of them is still appropriate here.

There will almost always be some potential harms, if nothing else because of complementary investments, such as the time that participants in an intervention necessarily redirect from one activity to another. Quantifying these risks and complementary investments may be difficult ex ante, but a discussion of what they are would help the reader assess their likely importance relative to the potential benefits of the tested intervention. Also note that measuring any harms ex post may be the exact reason for the study, particularly when the intervention is common.

4. Potential Harms to Research Participants from Data Collection (e.g., Surveying, Privacy, Data Management) or Research Protocols (e.g., Random Assignment)

Are data collection and/or research procedures adherent to privacy, confidentiality, risk management, and informed consent protocols with regard to human subjects? Are they respectful of community norms, e.g., community consent not merely individual consent, when appropriate?

Examples of sub-questions to consider as part of the broad question: Are there any risks that could arise from the data collection process or data storage, e.g., discomfort at being asked certain questions or a breach of confidentiality? If so, what are the mitigation strategies? Are there costs to participants from the data collection process, such as their time, and if so, what is the strategy or rationale for offsetting those costs?

Because these are all issues covered by most IRB processes, a sufficient explanation for a “yes” response may be to provide the IRB approval numbers for all IRBs that have approved the project. However, if there are particular issues that are important to discuss, please do so here.

5. Potential Harms to Nonparticipants

Does the intervention, policy or product being studied pose potential harm to nonparticipants? Related, are potentially affected non-participants particularly vulnerable? If yes to either, what is being done to mitigate such risks?

If risks to nonparticipants exist, discuss the mechanisms through which the risk arises from the study and provide an estimate of the magnitude of the risk and the probability of harm. 

6. Potential Harms to Research Staff

Are there potential harms to research staff from conducting the data collection that are beyond “normal” risks?

This could include, e.g., exposure to political violence, exposure to unusual levels of a communicable disease, mistrust due to a perceived lack of community consent, or risks to emotional wellbeing from surveying about difficult subject matter. This would not include, e.g., traffic accidents.

7. Scarcity

(Relevant for randomized controlled trials only): Did the inclusion of random assignment to treatment and/or control arms cause a change in the expected aggregate value of programs or products delivered? 

A common ethics question is whether the control group would have received a program, service, or product had it not been for the research. Most often “scarcity” is the appropriate answer: budget and other non-research considerations are the critical factors limiting scale, and as such the random assignment merely influenced who got which program, service, or product. If this is the case, answer “no”; if not, explain here.

8. Counterfactual Policy

(Relevant for randomized controlled trials only): Had the research not been conducted, is the counterfactual situation that would have happened instead predictably better for participants than what they actually received in any of the arms of the study?

This is related to the “scarcity” question but broader. For example, if treatment arms are cost-equivalent, is there one that would have been implemented throughout had there not been random assignment? More broadly, prior to implementation, were researchers aware of any specific programs, services, or policies that would have been implemented in the absence of the research? If so, explain here what would have happened, as best as the researchers can guess, and discuss the expected differential value (including reasons for uncertainty about that expectation) to participants of the foregone policy relative to the treatment and control arms.

9. Researcher Independence

Were there any contractual limitations on the ability of the researchers to report the results of the study? If so, what were those restrictions, and who were they from?

This could include, for example, approval of release of the paper and restrictions on data release, but does not include things such as a “comment period” during which interested parties have a right to review and provide comments prior to release but not to control the outputs of the study.

10. Financial Conflicts of Interest

Do any of the researchers have financial conflicts of interest with regard to the results of the research?

We define financial conflicts of interest according to the guidelines of the researcher’s institution (e.g., their university).

11. Reputational Conflicts of Interest

Do any of the researchers have potential reputational conflicts of interest? 

We define a reputational conflict of interest as one in which prior writing or advocacy could be contradicted by specific results pursued in this study, and such contradiction would pose reputational risks to the author.

12. Feedback to Participants or Communities

Is there a plan for providing feedback on research results to participants or communities? If yes, what is the plan? If not, why not?

Engaging in post-study feedback is a way of acknowledging the agency of participants and communities, and is thus a desired practice. However, it may be impractical due to costs, timing, challenges communicating the results, or potential harms if such communication may itself change behavior in undesirable ways.

13. Foreseeable Misuse of Research Results

Is there a foreseeable and plausible risk that the results of the research will be misused and/or deliberately misinterpreted by interested parties to the detriment of other interested parties? If yes, please explain any efforts to mitigate such risk.

14. Other Ethics Issues to Discuss

Are there any other issues to discuss, or issues that relate to a combination of the above? If so, please discuss them here.


 

November 12, 2020