Are Financial Conflicts of Interest in Medical Research Overblown?

The New England Journal of Medicine, the world's most influential medical journal, recently published three highly controversial essays by cardiologist Lisa Rosenbaum on financial conflicts of interest in medical research. In them, Rosenbaum argued that there is an unreasonable prejudice against researchers who have financial relationships with pharmaceutical companies, because a conflict of interest does not necessarily mean that the researcher is biased. In her view, this prejudice leads to fewer researchers working with industry, thus impeding the progress of research.

Rosenbaum was bold to write these essays, judging by the reaction—one writer called her a “conflict of interest denialist”—and she raised many worthwhile questions about such conflicts. But bad science by biased researchers is a serious problem, and she doesn’t offer a solution to it. My answer to Rosenbaum’s questions is that we need more transparency in science, not less, and more scrutiny of all researchers, for all sorts of potential biases.

Many readers were shocked not only by the content of Rosenbaum’s essays but by where they appeared. Pharmaceutical companies have certainly made many deceptive claims about their products, bolstered by misleading studies conducted by researchers those companies paid. Because the NEJM helps set standards for the quality of medical science—it was for many years a leading critic of industry influence—some no doubt fear that the NEJM’s defection would signal the cause’s defeat.

And they believe that the NEJM should know better, especially since the journal itself published the Vioxx study in 2000, one of the most damaging instances of a misleading medical research paper. An article about the multibillion-dollar pain medication argued that Vioxx had fewer gastrointestinal side effects than naproxen, an alternative drug. Each of the authors had financial ties to Merck, Vioxx’s manufacturer. In a 2005 “expression of concern,” the NEJM’s editors said that the study’s authors had wrongly failed to report data on three patients who took Vioxx and suffered heart attacks. Three patients may not seem like much. But tens of millions of people were taking Vioxx, so the difference in risk likely translated into tens of thousands of heart attacks and many deaths among patients on the drug.

The four-year delay in that statement deepened suspicions. In an extraordinarily harsh editorial, a former editor of the British Medical Journal argued that the NEJM had known of the problem since 2001 and that the delay in correcting the record “fits with the argument that medical journals are an extension of the marketing arm of pharmaceutical companies.”

So against that backdrop, what did Rosenbaum actually argue? She makes no specific recommendation about how conflicts of interest should be handled. Instead, Rosenbaum wants us to think about the moral psychology underlying conflict of interest policies, that is, the moral attitudes and emotions on the inside that correspond to the regulations and behaviors on the outside. She argues that the medical research culture is suffused with anger toward industry, and that policies are driven not by our reason but by our emotional disgust for what we experience as the defilement of medicine.

Rosenbaum acknowledges a history of industry wrongdoing, but notes occasions when the industry was right about the efficacy and safety of products. In her view, we have no evidence and little reason to think that, on balance, the harms to patients from bad drugs have outweighed the benefits from good ones.

And Rosenbaum is perplexed that industry is singled out for suspicion. There are many non-financial interests that can motivate researchers to do bad science but that do not trigger required disclosures. For example, if you write about Obamacare, you don’t have to disclose whether you voted for the president. Similarly, the only way to get and keep a job in science is to publish articles in highly competitive journals, which creates another kind of conflict of interest. For instance, a graduate student named Michael LaCour won a job at Princeton by publishing fabricated data in Science magazine, a fraud that came to light only recently.

There is also experimental evidence of bias against industry. Aaron Kesselheim gave physicians reports describing the data and methods of clinical trials of fictitious drugs and asked them to evaluate the quality of the research. Some doctors read reports saying that industry had funded the research; other doctors read reports that were identical except that the research was described as not industry-funded. Physicians reading about “industry-funded” research rated those studies as less rigorous.

The upshot of this prejudice, Rosenbaum believes, is that researchers who might otherwise work with industry decline to do so for fear of damaging their reputations. Good ideas that industry might fund are not pursued, and drugs that might benefit patients do not appear.

Is she right? Somewhat. Rosenbaum is justified in worrying about the moral psychology surrounding conflicts of interest. There is indeed bias everywhere. As the editors of the BMJ put it, “criticism of the pharmaceutical and biotechnology industry is often reflexive and unfair.” It’s wrong to treat industry scientists as if they were defiled, or university scientists as if they were pure. We should be suspicious of everyone.

However, Rosenbaum understates the case against the industry. Large and systematic reviews of the medical literature show that when industry funds a clinical trial to test a drug, that trial is more likely to report that the drug works. This doesn’t mean that industry-funded studies are always wrong. But industry bias is a fact, not a prejudice, and we must track and understand it to improve the quality of medical science. Scientific authors must disclose their conflicts of interest: We can’t track the bias unless we know who is getting paid for what.
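
To make that concrete, here is a minimal sketch, in Python, of the kind of bookkeeping that disclosure makes possible. The trial records below are invented for illustration, standing in for the hundreds of published studies that real systematic reviews examine; the point is simply that without accurate funding disclosures there is nothing to count.

```python
# A hypothetical illustration of "tracking the bias": given published trials
# with disclosed funding sources and recorded outcomes, tally how often each
# group reports a positive result. All records here are invented.
from collections import Counter

trials = [
    # (disclosed funding source, trial reported a positive result)
    ("industry", True),
    ("industry", True),
    ("industry", False),
    ("independent", True),
    ("independent", False),
    ("independent", False),
]

totals = Counter(funding for funding, _ in trials)
positives = Counter(funding for funding, positive in trials if positive)

for funding in totals:
    rate = positives[funding] / totals[funding]
    print(f"{funding}: {positives[funding]}/{totals[funding]} positive ({rate:.0%})")
```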

Universal suspicion might make us cynical about science and indifferent to research fraud. But suspicion, by itself, won’t improve science. The remedy for bad science is to make science more objective by making it more transparent.

For example, a complex statistical analysis of a clinical trial involves many decisions that can materially affect the results. Scientific articles are brief, and they report only a few of those statistical decisions. The rest are hidden, and we have to take it on faith that the researcher did the right thing. But we have the technology to store the data from published clinical trials online, forever. If the data were online, anyone could re-analyze them and check the authors’ results.
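
As an illustration of what that checking could look like, here is a minimal sketch, assuming a hypothetical patient-level dataset posted alongside a published trial. The file format, column names, and numbers are all invented; the only point is that a reader with the raw data can recount the events in each arm and compare them with the paper’s headline figures.

```python
# A hypothetical sketch of re-analysis from shared trial data. The dataset,
# its columns, and its values are invented; it stands in for a patient-level
# file that a journal or registry might host online.
import csv
import io

shared_data = io.StringIO(
    "patient_id,arm,adverse_event\n"
    "1,drug,0\n"
    "2,drug,1\n"
    "3,drug,0\n"
    "4,comparator,0\n"
    "5,comparator,0\n"
    "6,comparator,1\n"
)

events = {"drug": 0, "comparator": 0}
totals = {"drug": 0, "comparator": 0}
for row in csv.DictReader(shared_data):
    arm = row["arm"]
    totals[arm] += 1
    events[arm] += int(row["adverse_event"])

# Recompute the per-arm adverse-event rates that the paper reports.
for arm in totals:
    print(f"{arm}: {events[arm]}/{totals[arm]} adverse events "
          f"({events[arm] / totals[arm]:.0%})")
```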

We should not mute our moral sensitivity to financial conflicts of interest, but rather raise our sensitivity to all possible sources of bias. The way forward is to require that industry scientists and everyone else do better science and do it more transparently.