What’s the best way to help recovering female addicts stay clean when they get out of prison? A good place to start researching that question might be to talk with people who have been through the experience. Or at least that’s what Harvard graduate student Kimberly Sue thought when she set out to interview female prisoners in Massachusetts with a history of opiate addiction.
But before Sue could even approach any of the women she hoped to interview, she needed to get permission from the Massachusetts Department of Public Health, the Department of Corrections, and, crucially, the Harvard Committee on the Use of Human Subjects, also known as an institutional review board, or IRB. And it was at Harvard that she faced her greatest hurdles.
The Harvard IRB wanted to put all kinds of restrictions on Sue’s work. It didn’t want her to talk to women who had dealt with trauma or active mental illness. (That would have excluded just about all the women in her study.) It wanted her to get her informants’ signatures on written consent forms with complex legal language that, as Sue soon discovered, the women were unable to understand. It fretted that Sue would advise women to terminate pregnancies. And it forbade her from observing women outside of such controlled spaces as hospitals and jails, as much for Sue’s safety as for that of the women she studied.
Much of this struck Sue as bizarre. She had worked in tough neighborhoods before, in Durban, South Africa, as well as North Philadelphia and Bedford-Stuyvesant, not to mention local jails, so she knew what kinds of risks she was facing. More importantly, she knew the women she wanted to study, and what kinds of risks they were facing. While the IRB feared that Sue’s questions about drug use would spark cravings in recovering addicts, the former addicts themselves knew that friends, family, and familiar places were more likely to spark cravings than a question from a graduate student.
Sue was ready to acknowledge her ethical responsibilities as a researcher, but the IRB seemed detached from reality and left her feeling, she said, “alienated, oppositional, and isolated.” She was eventually able to win the approvals she needed, but the experience was searing enough that she subsequently wrote an essay in which she warned others about the ways “IRBs can actually impede, slow down and alter our research.” Indeed, anyone who cares about universities’ ability to study society and train researchers should worry about the obstacles IRBs can impose.
IRBs exist at just about every university or hospital that gets federal funding for research involving human beings, whether that means medical studies or just phone interviews. (The same rules do not apply to for-profit corporations, except drug companies.) The boards themselves are composed of a mix of researchers, administrators, and some members who are not affiliated with the institution and do not conduct research themselves, but who are supposed to represent the voice of the community. Along with the boards, larger institutions have IRB offices with staffing that can range from just three or four administrators to dozens, with annual budgets easily exceeding $1 million.
Regardless of the number of administrators employed, the IRB machinery exemplifies the way federal regulation can combine with institutional self-preservation to hamper a core mission of the university. The university exists to ask questions, yet IRBs forbid some questions to be asked. The university exists to spread truth, yet IRBs insist on altering facts in published accounts. (See, for example, our story about the career aspirations of Harvard and Stanford students. The story contains an intentional inaccuracy inserted at the insistence of an IRB intent on protecting the identity of those students, even though most had no problem with being quoted by name.)
And even when researchers persevere, the IRB can function as a drag, limiting what they can achieve. A 2012 study found that preparing and complying with IRB protocols (as well as analogous protocols for animal research) “were by far the most time-consuming” administrative burdens faced by researchers with federal grants. While that survey covered researchers in all fields, social scientists in particular report similar friction. For universities to do their best work, they need a better system.
The federal government invented the IRB with excellent intentions. In the mid-1960s, the Public Health Service learned that a few medical researchers were doing some troubling things with their grants. Two doctors had injected cancer cells into elderly hospital patients, confident that the patients’ bodies would reject the cancer; they didn’t bother asking the patients’ permission or telling them what they were doing. Another had tried—unsuccessfully—to transplant an animal kidney into a human being, without seeking anyone’s approval.
To rein in such cowboys, in 1966 the surgeon general required grant recipients to secure “an independent determination of the protection of the rights and welfare of the individual or individuals involved.” Instead of a federal agency making the determination, each research hospital or university would have to establish its own IRB, a local office responsible for compliance with federal rules.
The need for such bodies appeared even more acute a few years later, when investigative journalists broke the news of additional abuses by federal researchers. At the University of Cincinnati, a researcher had used Army funds to judge the effect of nuclear combat on soldiers by exposing dozens of cancer patients—most of them black and poor—to massive doses of radiation. And the Public Health Service had, for decades, monitored the health of hundreds of African Americans with syphilis, handing out colored aspirin tablets to deceive them into thinking they were receiving medical treatment. More than any other research, this Tuskegee Syphilis Study, as it became known, led to congressional attention and, in 1974, to a statutory requirement that Public Health Service grants for human experimentation receive IRB review.
Policymakers figured that if IRB oversight made sense for medical research, it must also be the right tool for governing social science interviews, surveys, and ethnography. Anthropologists, sociologists, and political scientists disagreed, explaining that their work was less predictable—and less hazardous—than medical experiments and should not be governed by the same procedures. As one sociology department warned in 1979, “These proposals display a profound ignorance of social science research requirements, techniques and methodology … and an unbelievably arrogant disdain for First Amendment protections of free speech.”
Nevertheless, federal regulators proceeded on their course, adopting a definition of human subjects research that includes not only medical and psychological experimentation but almost every form of systematic interaction between researchers and the people they study. In other words, altering someone’s DNA in a gene therapy trial and asking about their taste in movies fall into the same broad category.
To be sure, even simple interviews can pose real ethical challenges. Some social scientists deceive their subjects in an effort to see whether the subjects harbor racial or gender biases that they would not admit (or even perceive) if questioned directly. More commonly, researchers accompany questions about sexuality, criminal behavior, or other sensitive topics with promises of confidentiality. In a recent case at Boston College, such promises proved inadequate when the British government secured recordings of former Irish Republican Army operatives who had spoken of an unsolved murder. A good oversight system would teach researchers techniques for keeping such material confidential, or at least for avoiding offering assurances that can’t be kept.
Unfortunately, in practice IRB review too often turns into a farcical imitation of ethical deliberation, as boards obsess over typographical errors or wildly improbable dangers while ignoring empirical evidence about the kinds of research that have caused trouble in the past. Like other forms of over-bureaucratization, they can do real damage to universities.
Most obviously, IRBs hinder research through simple delay. Even the most basic reviews can take weeks, while a more complex review by the full IRB can take many months. Since IRBs often meet only monthly, multiple rounds of revision can drag the process out. Sue waited half a year for clearance. When Northwestern University professor Brian Mustanski sought clearance for a foundation-funded study of young members of sexual minorities, multiple rounds of IRB review consumed ten of the twenty-four months of funding. Fortunately, Mustanski was able to persuade the IRB to retract its demand for parental permission; had he confined his study to youth willing to involve their parents, it would have skewed his results.
Other researchers are less successful. Graduate student Sarah Young tried to get approval to interview Chinese mothers about their views on the one-child policy. The women were willing, and she had scholars at a Chinese university willing to host her. But after three months of unreturned emails and phone calls, and finally an in-person meeting with an administrator who belittled her knowledge of Chinese history, Young abandoned her plans to travel to China or conduct interviews.
When IRBs do respond, they can set conditions on researchers’ methods. Some of the most striking restrictions concern research about pressing public concerns, just the sort of thing that we might want to encourage scholars to pursue. Sociologist Jack Katz has tracked IRB interference with studies of Mormon sexuality, university admissions practices, and labor conditions at Indian casinos. Though the IRBs no doubt claimed ethical concerns, the effect is to block politically sensitive research.
Geographer Joshua Inwood wanted to interview people who had helped Greensboro, North Carolina, deal with the unhealed wounds left by a fatal 1979 attack by Klansmen and Nazis on a communist-led demonstration. His IRB chair summoned him to a meeting with the university general counsel, who wanted to vet all of Inwood’s manuscripts before he sent them out to peer review. Inwood resisted this demand, but he was forced to agree not to identify public officials by name, lest the airing of their views lead to electoral defeat. His published scholarship leaves out important quotations from the interviews he conducted, in favor of statements from public hearings and newspapers. “I’ve got this notion of American democracy, where public officials should be accountable for their views,” Inwood laments. “That shouldn’t be a harm; it should be part of the process.” Dreading a career in which his research would be constrained by an unreasonable IRB, Inwood moved to a different university.
Even innocuous studies are at risk. A pair of researchers who wanted to survey music education majors about why they had chosen that profession faced so many burdensome requirements that their sample was reduced from several thousand to 250. The same thing happened to a doctoral student who sought to survey college professors about the knowledge and skills they hoped to impart. Forced by the University of South Florida IRB to seek permission from every university whose faculty she wished to contact, she struggled simply to get messages returned by many of those other IRBs.
At the City University of New York, it took a librarian five months to get approval to ask students about their study habits and their use of the library. Unsurprisingly, she reported, “fear of the complexity of IRB regulations has led librarian researchers I know to simply avoid any research involving library users.”
If a researcher can complete her study, an IRB may still insist that she not report everything she learned. An IRB forbade anthropologist Scott Atran from identifying the terrorist networks he was studying, so that rather than write about, say, Hamas, he would have to refer to “Group A.” This issue of the magazine features work by Amy Binder, whose important research (and subsequent book) on campus conservatism was degraded by foolish IRB requirements. Forced to keep secret not only the names of the students they interviewed but even the sites of their work, Binder and her coauthor could not quote the students’ writing in campus newspapers or explore the influence of specific faculty members. And by keeping real names out of the final book, the IRB prevented future journalists and historians from understanding the early careers of the next generation’s Karl Rove or Dinesh D’Souza.
While faculty and advanced graduate students are the researchers most likely to face IRB troubles, undergraduates are affected as well. Some professors have stopped assigning interview projects, judging it not worth the bother. And ambitious students writing capstone theses must plan well in advance to get approval before their senior year. Whatever the outcome of an individual project, IRB rules can make both teaching and research more timid.
Balancing the interests of researchers, research participants, and society at large is not easy. As Harvard IRB chair E. L. Pattullo wrote in 1978, “There is no sieve, consistent with the maintenance of a healthy research enterprise, which will ensure against every possibility of subject abuse.” Short of prohibiting or permitting all research, any system will impose too much scrutiny on some studies and too little on others.
Still, we can do better than we are doing now. The simplest, boldest reform would be to restrict IRB jurisdiction to a narrow range of research projects. Finland, for example, requires researchers to seek ethics review only when they intervene “in the physical integrity of subjects,” deviate from informed consent, study children under the age of fifteen, use “exceptionally strong stimuli,” or risk subjects’ long-term mental health or security. All of these terms would require elaboration, but a study like Sue’s—based on interviews with consenting adults—would likely proceed without any mandatory review.
Closer to home, Canada has shown what reforms are possible when the rule-making process is made more inclusive. In 1998, Canada imposed a system much like that in the United States, resulting in howls of protest from researchers and what sociologist Will van den Hoonaard has called “increasing homogeneity and impoverishment of the social-scientific methods.”
But unlike U.S. regulators, Canadian rule makers listened to these critics and included them in a revision process. New guidelines, issued in 2010, include an entirely new chapter on qualitative research, acknowledging that since qualitative researchers do not plan experiments in advance, ethics boards cannot demand the same detailed protocols common in medical studies. The chapter also notes that anonymity isn’t for everyone, whether because narrators wish to be named or because people in power deserve to be held to account for their actions.
Canada also guarantees its researchers the right to appeal decisions of ethics boards, and endorses academic freedom, including “freedom of inquiry, the right to disseminate the results of that inquiry, freedom to challenge conventional thought, freedom to express one’s opinion about the institution, its administration or the system in which one works, and freedom from institutional censorship.” That may sound obvious, but it is the envy of U.S. scholars, who see IRBs as failing to consider such freedom in their quest to protect subjects from every imaginable harm.
And in the United States, the federal government has finally at least acknowledged the problem. In 2011, the Department of Health and Human Services, along with the Office of Science and Technology Policy, conceded that “overregulating social and behavioral research in general may serve to distract attention from attempts to identify those social and behavioral research studies that do pose threats to the welfare of subjects and thus do merit significant oversight.” More generally, they accepted the need to reduce “burden, delay, and ambiguity” for investigators in all areas of research. Scholarly associations were quick to agree, as was a National Research Council study, which hoped to “increase the efficiency and effectiveness of human subjects’ protection, while reducing burden overall.”
One key proposal in the 2011 announcement called for a shift from reliance on prior review of nearly every protocol to a system in which, for low-risk types of research, “researchers would file with their institution or IRB a brief registration form (about one page long) that provides essential information about the study.” These forms could be audited periodically to make sure researchers were playing fair, but researchers would not have to wait weeks or months before starting their work.
A complementary proposal from the National Research Council would be to “build a stronger evidence base” about the effects of participating in research. That reform would help researchers like Sue, whose IRB knows little about the real-world ethical dilemmas that ethnographers face, and tends to base its decisions on what bioethicist Ezekiel Emanuel has termed “gut reactions … which is worthless.”
Unfortunately, regulators have not acted in the three years since the federal proposals were published. Part of the problem may be resistance from Public Responsibility in Medicine and Research (PRIM&R), a nonprofit that certifies—and thus tends to represent—IRB administrators whose career tracks depend on continued IRB jurisdiction, and that has enjoyed close contact with federal regulators. In its comments at the time, PRIM&R argued against empowering researchers to determine by themselves which studies require review. If there’s to be a one-page form, PRIM&R wants its members signing off on it.
Another reason for the holdup is that so many different federal agencies now share IRB regulations (known as the Common Rule), and each would need to agree on any reform. As a Department of Veterans Affairs research official warned in early 2013, “Given the current political climate and the often divergent interests of the seventeen agencies that adhere to the rule, meaningful systemic modernization of the Common Rule is not likely to occur any time soon.” By summer 2014, rumors spread that the next step, a proposed revised regulation, might appear within months—but also that it could make things worse, not better.