
The Dawn of Antidepressants

Have antidepressant drugs ever truly worked—and does that matter?


A pendulum—slowly swinging between the extremes of enthusiasm and skepticism—is often used as a metaphor for our orientation towards particular medical therapies. For instance, in the past decade or so, a renewed debate has emerged on a topic that many once saw—and others still see—as sound and settled science: the use of drugs to treat depression.

ORDINARILY WELL: THE CASE FOR ANTIDEPRESSANTS by Peter D. Kramer
FSG, 336 pp., $27.00

There really is no case—to my mind anyway—against antidepressants on some sort of philosophical ground that sees the alleviation of mental suffering via chemical means as somehow unnatural or unnecessary. Depression is a source of profound and widespread suffering, and if that suffering can be eliminated or lessened through safe medicines, it should be. The contemporary debate over antidepressants—or at least the one that matters—is not about whether we should use medicine to treat depression. Instead, it’s about whether currently existing antidepressants are, in fact, worthy of their name. The sheer scope of depression—and the massive scale of antidepressant prescribing—makes this a debate of enormous consequence.

In his new book Ordinarily Well: The Case for Antidepressants, psychiatrist and writer Peter D. Kramer provides a forceful rejoinder to this growing tide of skepticism. Kramer has been a prominent voice on matters of mental health for some time, beginning with his 1993 book Listening to Prozac. His new book takes a unique approach: though at times passionate and personal, it is mostly a detailed excavation of the thorny landscape of the empirical evidence for antidepressant medications. Wherever one ultimately comes down in this debate, the evidence is much muddier than any of us would like.

“I have noticed that excessive bleeding is dangerous for most madmen,” one hospital administrator noted in late-eighteenth-century France, “because it causes the illness to degenerate into imbecility. Inmates often arrive … reduced to such a state [by bleeding] that they are even unable to eat.” This description of psychiatric therapy via bloodletting was no aberration. Dora Weiner, in The Citizen-Patient in Revolutionary and Imperial Paris (the source of this quote), writes that, traditionally, treating mental illness meant eliminating insalubrious humors and fluid pressure on the brain through crude medical measures such as the induction of violent defecation, vomiting, sweating, urination, and/or bleeding.

The use of such outrageous therapeutics did not end in that era. In an article last month in the New York Review of Books (“Neuroscience and the Law: Don’t Rush In”), Jed S. Rakoff describes the zealous, grotesque embrace of first eugenics and then lobotomies for the prevention and treatment of mental illness in the first half of the twentieth century. He states that more than 40,000 lobotomies were performed in the two and a half decades after 1940, including on children and for the “treatment” of homosexuality. These sorts of atrocities, it’s worth noting, are not unique to the field of psychiatry: They can be found throughout the history of medicine.

But then the story began to change, which is where Kramer begins his book. In the 1950s, in Switzerland, a psychiatrist named Roland Kuhn developed one of the first modern antidepressant medications. Before that point, Kuhn had relied on a combination of psychotherapy and existing therapeutics. For instance, with “insulin-shock therapy,” he would administer high doses of insulin to his patients, causing their blood sugar to plummet to dangerously low levels, which would in turn induce ostensibly therapeutic seizures. He would also sometimes turn to the old uppers and downers, amphetamine and morphine.

But new drugs were emerging. The first antipsychotic drug—what we now call Thorazine—became available in this decade. However, after a brief trial period, Kuhn’s hospital couldn’t afford any more Thorazine, so Kuhn turned to a Swiss pharmaceutical company named Geigy to see if it might provide a similar compound. Geigy gave him substance G22355. Yet Kuhn found that G22355, in contrast to Thorazine, wasn’t very effective in treating the psychosis of his schizophrenic patients; it did, however, improve their mood. So he proceeded to try it on dozens of his depressed patients. He observed marked clinical improvements among these individuals, observations he presented and published in 1957. Geigy, as it happened, only moved ahead with the production of the drug, which would be called imipramine, after one of its key shareholders gave it to his wife with apparent benefit. Imipramine was the first tricyclic drug—a class of antidepressants still in clinical use today. Indeed, no drug introduced since has surpassed it with respect to efficacy.

And so the age of antidepressants began.

The story of Kuhn and imipramine, Kramer contends, is unique. Though Kuhn’s study was flawed, it was an “experiment … of singular evidentiary value,” one which can never again be replicated, and because of this, it holds important lessons for the interpretation of modern antidepressant drug studies, the central concern of Kramer’s book. In contrast to the participants of modern randomized trials of antidepressants—who may have already been on several rounds of antidepressants and who are studied in a somewhat artificial setting—Kuhn’s patients were antidepressant virgins treated in a real clinical environment. “He had witnessed,” Kramer writes, “the full power of an antidepressant,” something (he believes) that modern investigators can no longer do. Whether this is as important as Kramer suggests is something I’ll return to later.

From there, Kramer proceeds to tell, in a series of short chapters, the story of the other developments that together brought about the modern era of psychopharmacology. One critical advance was the development of the randomized clinical trial (RCT). Doctors, of course, had already been studying the impact of drugs on disease since the advent of medicine. They would administer a treatment, observe its effects, and conclude from these observations whether it had been successful or unsuccessful. That might sound, at face value, like a perfectly reasonable method for determining efficacy. In truth, however, this approach has grossly misled the medical profession from the time of Hippocrates to today. Based on their honest clinical observations, physicians have become deeply, sincerely convinced of the efficacy of treatments later shown to be useless and even deadly. The reality is that the course of most diseases is quite variable: some will improve with a useless treatment, while others will worsen even if they receive a highly effective one. Distilling the true “clinical effect” from these varied and contradictory experiences requires nothing less than the cold-blooded rigor of statistical analysis.

Another problem: if the decision to treat or not to treat is anything but entirely random, any effects that are observed may reflect characteristics of the individuals, and not of the drug. That is why—as Kramer emphasizes—RCTs were such a milestone in medical history. An RCT is an experiment conducted on human beings, with treatment determined by a coin flip (or something like it). When performed correctly, RCTs can bring us closer to the truth about a medicine’s effect than is possible through observation alone. Yet as crucial as RCTs are to the development of medical therapeutics, they are a very recent innovation: as Kramer notes, the age of the clinical trial only began in 1946, with the British Medical Research Council’s landmark trial of streptomycin for tuberculosis (notably, it worked). The key element of this trial was that it randomized participants to treatment or no treatment; later trials, Kramer notes, would also include blinding as well as placebos for the no-treatment group.
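
To see why the coin flip matters, here is a minimal sketch, in Python and with entirely invented numbers, of the logic of randomization: because assignment depends on nothing about the patient, prognostic factors such as baseline severity end up roughly balanced across the two arms, so a later difference in outcomes cannot be explained by one arm having started out sicker.

```python
import random

random.seed(0)

# A hypothetical cohort: each participant carries a "baseline severity"
# that predicts outcome regardless of treatment -- a stand-in for the
# prognostic factors a real trial must balance.
participants = [{"severity": random.gauss(20, 5)} for _ in range(200)]

treated, control = [], []
for person in participants:
    # The coin flip: assignment depends on nothing about the patient.
    (treated if random.random() < 0.5 else control).append(person)

def mean_severity(group):
    return sum(p["severity"] for p in group) / len(group)

# With random assignment, average baseline severity comes out roughly
# equal in both arms (and closer still as the trial grows larger).
print(round(mean_severity(treated), 1), round(mean_severity(control), 1))
```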

This history is important, because a good portion of Kramer’s book, though interspersed with interesting clinical anecdotes and poignant reflections, grapples with the question of how we should interpret modern antidepressant clinical trials. A few other developments, however, were also needed before our era of antidepressant therapy could begin. In order to determine if antidepressants are effective in a clinical trial, their effect must somehow be objectively quantified. Kramer describes how, in the 1950s, a British psychiatrist with statistical expertise named Max Hamilton formulated a depression scale—still employed in studies today as the Hamilton Depression scale—that quantified various symptoms of depression, with a response defined as a reduction in one’s score by half. By the 1970s, when Kramer began his medical training, randomized trials of imipramine, relying on improvements in Hamilton scores, had demonstrated positive results. Although, Kramer says, an older guard trained in Freudian psychoanalytic therapy still resisted the use of these medications, they were increasingly recognized and utilized, to—in his eyes—often marvelous effect.
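
As a rough illustration of how a scale of this kind turns symptoms into trial data, the sketch below (using hypothetical scores, not the actual Hamilton items) classifies a patient as a responder when the score falls by at least half, the definition mentioned above.

```python
def is_responder(baseline_score, followup_score):
    """Response, as defined above: the depression score falls by at least half."""
    return followup_score <= baseline_score / 2

# Hypothetical before/after scores on a Hamilton-style scale.
scores = [(24, 10), (22, 18), (30, 14), (19, 12)]
responders = sum(is_responder(before, after) for before, after in scores)
print(f"{responders} of {len(scores)} patients met the response criterion")
```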

A final milestone Kramer notes is 1987, the year the FDA approved the first selective serotonin reuptake inhibitor, Prozac. Prozac seemed to have similar efficacy to imipramine, but a better side effect profile. If you or someone you know is taking an antidepressant, it is most likely a drug of this or a similar class. Kramer sees the introduction of antidepressant medications as altering the very landscape of mental health in America. What he calls “end-of-the-line” depression—severe, end-stage, intractable, hopeless depression—became a rarity in his eyes, though he admits empirical evidence is lacking. “[M]y impression is that end-of-the-line depression is less common in part because—haphazardly, with many a case missed altogether—we treat depression early, generally with antidepressants.” Modern psychopharmacology had arrived.

Yet, it wasn’t long before the pendulum again began to swing. In 1998, without warning, two researchers fired a shot across the bow of the psychiatric mainstream. This came in the form of a paper whose title echoed Kramer’s own book from five years earlier. “Listening to Prozac but Hearing Placebo: A Meta-analysis of Antidepressant Medication” was, according to Kramer, “the opening salvo” of today’s still unfolding debate about antidepressants.

Clinical trials, I’d argue, touch on all of our lives in profound ways, though elucidating the finer points of their design, analysis and interpretation—as Kramer does—is not an easy task. Optimally, clinical trials are both well-designed and massive, in which case they sometimes produce definitive results. In such cases, repeating the trials becomes worthless and often grossly unethical. However, psychiatry has few, if any, of these “gold standard trials,” as Kramer acknowledges, and for that reason, the field relies on “meta-analyses.” These are essentially studies that pool together data from multiple smaller studies to produce more definitive results. “Listening to Prozac But Hearing Placebo” was one such meta-analysis (albeit a highly controversial one), performed by Irving Kirsch, a psychologist interested in placebo science, then at the University of Connecticut, together with psychologist Guy Sapirstein.
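
For readers unfamiliar with the technique, the sketch below illustrates one common and much-simplified way a meta-analysis pools results: each trial contributes an effect size weighted by its precision, so larger, tighter studies count for more. This is a generic illustration in Python with invented numbers, not the specific procedure Kirsch and Sapirstein used.

```python
# A generic fixed-effect meta-analysis: pool per-study effect sizes,
# weighting each by the inverse of its variance. (A simplified sketch
# with invented numbers, not Kirsch and Sapirstein's actual method.)
studies = [
    # (standardized effect size, variance of that estimate)
    (0.45, 0.04),
    (0.30, 0.02),
    (0.60, 0.09),
    (0.25, 0.01),
]

weights = [1 / variance for _, variance in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```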

Now, as Kirsch has argued in a 2014 review (“Antidepressants and the Placebo Effect”), the fact that placebos might have an effect on depression should not be seen as surprising. Depression, he contends, is frequently characterized by a deep sense of hopelessness, and so “the mere promise of an effective treatment” might help to replace “hopelessness with hopefulness—the hope that one will recover after all.” This was, in any event, his and Sapirstein’s thinking when they set out to investigate how placebos might affect depression.

Their study was a meta-analysis that combined the results of 19 placebo-controlled trials, each of which looked at the effect of some drug on depression. In analyzing the data from these varied studies, they found that those who took drugs had a 1.55 standard deviation improvement overall in their depression scores (measuring improvement in standard deviations is a way to standardize these changes), consistent with a substantial response. However, those who took placebo also improved substantially, by 1.16 standard deviations. They thus write:

Subtracting mean placebo response rates [1.16] from mean drug response rates [1.55] reveals a mean medication effect of 0.39 SDs [standard deviations]. This indicates that 75% of the response to the medications examined in these studies was a placebo response, and at most, 25% might be a true drug effect … [this] means that for a typical patient, 75% of the benefit obtained from the active drug would also have [been] obtained from an inactive placebo.
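
The arithmetic behind those percentages is worth making explicit; the snippet below simply reproduces it from the figures quoted above.

```python
drug_response = 1.55     # mean improvement on drug, in standard deviations
placebo_response = 1.16  # mean improvement on placebo, in standard deviations

drug_effect = drug_response - placebo_response    # 0.39 SDs
placebo_share = placebo_response / drug_response  # roughly 75%
drug_share = drug_effect / drug_response          # roughly 25%

print(f"medication effect: {drug_effect:.2f} SDs")
print(f"share of drug response matched by placebo: {placebo_share:.0%}")
print(f"share attributable, at most, to the drug itself: {drug_share:.0%}")
```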

Now this is not, necessarily, as damning as it might sound. As Kramer notes, from this result alone, we cannot conclude that there is a powerful (or indeed, any) placebo effect at work: it may simply be that people improve over time for various reasons, including the natural history of the illness, while antidepressants help those who wouldn’t otherwise improve.

However, Kirsch and Sapirstein had some other findings that suggested that there was indeed a real placebo effect at play, at least in their view. For instance, they found that the effect was basically the same regardless of the type of drug used, meaning there was no major difference whether subjects received traditional antidepressants (e.g. imipramine or Prozac) or “other medications” (e.g. thyroid hormones, benzodiazepines, barbiturates, or lithium) that are not considered antidepressants. While it could be true that these latter drugs actually had an antidepressant effect, it could also be that they caused side effects that made the individuals taking them come to believe they were taking active drugs. This, in turn, would boost the intrinsic hope-inducing effect of the placebo. This is what is referred to as an “active placebo,” a paradoxical-sounding concept that is rather fascinating. (As a side note, it’s worth stressing that for “harder” outcomes, like the growth of a tumor, there is no placebo effect, active or otherwise. Where they exist, placebo effects are, by and large, confined to subjective outcomes.) In short, Kirsch and Sapirstein hypothesized that the relatively modest effect seen with antidepressants—which have side effects that can be detected by the patient—might actually be nothing more than the incremental effect of an active placebo over an inactive placebo.

Kramer rebuts these highly provocative claims with a wide range of arguments. The “active placebos” used in the studies all have potential antidepressant effects of their own, he argues (especially when, as in one study, they were combined with an antidepressant). Moreover, the improvement seen in the placebo arm of clinical trials, as he describes across a number of chapters, does not reflect a real placebo effect so much as a host of benefits that accompany the experience of participating in a clinical trial. To document this, Kramer visits a for-profit clinical trial testing center that is frequently used in modern randomized trials of psychiatric drugs. He understandably finds the experience rather disconcerting.

As he notes, it is considered unethical to pay people too much to participate in a clinical trial (as this would be coercive). As a result, the relatively modest reimbursements for participation in these trials wind up attracting only economically disadvantaged individuals. In addition to the money, he describes how the host of fringe benefits that come from participation—like transportation to and from the center in the company of others—confers real quality-of-life improvements for these individuals (if only temporarily):

For the duration of a trial, participants enjoy higher income, richer social contacts, attention from doctors and nurses, access to transportation, time in an attractive setting, structured days, and a sense of purpose. In the bus, talk turns to cash gifts given to adult children. That’s a luxury the extra income affords, the ability to be generous. Even on placebo, these patients ought to get better.

Collectively, he argues, these benefits amount to a weak form of psychotherapy, and as a result, the placebo arm is not really a placebo arm: “In antidepressant trials as they are run today,” he writes, “the contrast is not between dummy pills and active pills. It is between psychotherapy plus dummy pills and psychotherapy plus medication.”

The point is, I suppose, reasonable, but then again, does it really matter? In Kramer’s own practice, he generally doesn’t prescribe drugs outside of the confines of psychotherapy. To my mind, what matters is the added effect of antidepressant drugs over and above some sort of therapy: if the drugs have no added effect, then why shouldn’t the money we spend on antidepressants instead go towards more talk therapy and social support for disadvantaged people? Still, even in Kirsch and Sapirstein’s 1998 study (and later studies by Kirsch and others), antidepressants do have some “added” effect: it’s just far more modest than what Kramer believes he sees in clinical practice.

At the end of the day, Kramer had thought that the Kirsch and Sapirstein paper would prove to be a quirky, quickly forgotten study. “I had seen it as a rearguard action,” he notes. On the contrary, “[i]t was in the vanguard, the start of a resurgence of doubt about antidepressants.”

This “resurgence of doubt,” however, would ultimately stem less from concern over the placebo effect, and more from revelations about the corrupting influence of corporate greed.

Imagine, for a moment, a friend who boasts of successes—whether in cards or in love, in stock picks or in sport fishing—while leaving out the losses and the lonely nights, the busts and empty buckets. Depending on how well you know him or her, you might come to a rather inflated sense of your friend’s talents. Now imagine that this individual is actually an enormous multibillion-dollar business running clinical trials on antidepressants, and that it was selectively publishing successful trials while sweeping the less successful ones under the rug. You might similarly come to have an inflated sense of the potency of the drugs it was producing.

That’s basically what happened over the last few decades, though it took a Freedom of Information request to figure it all out. Kramer discusses two studies, both published in 2008, that used data from trials that had never been published to shed light on the real efficacy of antidepressants. Kirsch was the lead author of one (“Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration”), published in the journal PLoS Medicine. The other, by Erick H. Turner and colleagues (“Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy”), was published in the New England Journal of Medicine.

The Turner study explicitly examined the extent to which publication bias affected the “apparent efficacy” of antidepressants, and it made headlines. Briefly, these investigators obtained trial data submitted to the FDA for 12 antidepressants approved between 1987 and 2004. Overall, of the 74 studies they found, almost one third had never been published. They then classified the studies based on how the FDA had reviewed them: “positive” (the drug worked), “negative” (it didn’t), or “questionable” (somehow mixed). They found that the vast majority of positive studies—37 out of 38—were published. By contrast, of the 36 negative and questionable studies, “3 were published as not positive, whereas the remaining 33 either were not published (22 studies) or were published, in our opinion, as positive (11) and therefore conflicted with the FDA’s conclusion.” This is disturbing, to say the very least. Selective publication fundamentally distorts our perception of whether the drugs work at all, much less how well they work. Still—and as Kramer underscores—the study found, at the end of the day, that the drugs were effective, just substantially less so than what one would believe on the basis of the published literature.
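
The distortion that this selective publication produces in “apparent efficacy” can be quantified directly from the counts reported above: judged by the published literature alone, nearly all trials look positive, whereas by the FDA’s own reckoning only about half were.

```python
# Counts from the Turner study, as summarized above.
total_trials = 74
positive_total = 38                      # judged positive by the FDA
positive_published = 37
published_as_not_positive = 3
published_as_positive_despite_fda = 11   # conflicted with the FDA's conclusion

published = (positive_published + published_as_not_positive
             + published_as_positive_despite_fda)
never_published = total_trials - published   # the "almost one third"

# Share of trials that appear positive if you read only the journals...
apparent_positive_rate = (positive_published + published_as_positive_despite_fda) / published
# ...versus the share the FDA itself judged positive.
fda_positive_rate = positive_total / total_trials

print(f"{never_published} of {total_trials} trials never published")
print(f"apparently positive in the published literature: {apparent_positive_rate:.0%}")
print(f"positive according to the FDA review: {fda_positive_rate:.0%}")
```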

The 2008 study by Kirsch and colleagues didn’t compare unpublished and published studies, but instead simply included both together in order to determine, more accurately, the actual effect of antidepressants. Ultimately, what they found (again, drawing on study data obtained through a Freedom of Information request) was that there was a statistically significant effect of antidepressants but that it wasn’t very impressive (indeed, it was below the standard considered to be of “clinical significance” by the British National Health Service), though this depended on the severity of depression. This last finding—that the effect of antidepressants depended on the severity of depression—was confirmed in another widely reported meta-analysis by Jay C. Fournier and colleagues that appeared in 2010 in the Journal of the American Medical Association (also discussed and critiqued by Kramer). In this study, whereas evidence for efficacy was “nonexistent to negligible” for patients with more moderate forms of depression, there was evidence that the drugs were fairly potent for those with “very severe symptoms.” (Kramer cites other studies that he contends dispel this so-called “severity hypothesis.”)

It was in the context of these somewhat scandalous revelations that the debate over antidepressants burst onto the public stage. In 2011, Marcia Angell, physician and former editor-in-chief of the New England Journal of Medicine, wrote a two-part, wide-ranging essay in the New York Review of Books that reviewed several critical books about mental illness, one of which was a book by Kirsch that built on his 2008 study. Angell, however, went beyond a discussion of antidepressant efficacy to excoriate the diffuse corruption of the psychiatric field by industry money, the expansion of psychiatric diagnostic categories to encompass ever-larger numbers of people, and the downgrading of non-drug therapies. “[B]y emphasizing drug treatment,” she notes, “psychiatry became the darling of the pharmaceutical industry, which soon made its gratitude tangible.” The same year, Kramer issued something of a rebuttal, a long article headlined “In Defense of Antidepressants” that appeared in the New York Times.

Kramer, I should be clear, is in no way an apologist for Pharma’s dishonesty. Indeed, he describes no financial conflicts of interest with the industry, and he denounces its chicanery in no uncertain terms. It is clear that his support for antidepressants comes from a combination of his clinical experience and his reading of the literature. Additionally, he himself prescribes the drugs conservatively (he calls them his “cotherapists”). His belief that the drugs are improving the lives of countless millions is, in other words, genuine. However, though his central assertion, contained in the book’s title, is that antidepressants simply work “ordinarily well”—that is to say, like other effective medications—he makes them out to work extraordinarily well. “I find antidepressants useful up and down the line,” he notes. Similarly: “There are utterly untreatable depressions—but today, few.” And with the right approach, he notes, “over three-quarters of those who start treatment enjoy substantial improvement.” In short, he sees them as highly effective in bringing relief to the majority of treated individuals with basically every size and shape of depression. That’s no ordinary drug—that’s a wonder drug.

Yet meta-analyses for major depression, including the ones discussed above, consistently show “effect sizes” that are modest. Kramer disputes these effect sizes with a wide range of detailed criticisms, some of which I’ve already mentioned, and which can only be briefly and incompletely listed here: he argues that these studies over-account for the placebo effect; that their patient populations are atypical; that the clinical trial setting is artificial; that the scoring system (the Hamilton) is flawed; that the choice of trials included in meta-analyses is problematic; that the individual trials relied on by these meta-analyses have shortcomings of their own (insufficient doses of drugs, the wrong drugs); and that the doubt implanted in people’s minds by the fact that they could receive a placebo results in a psychological “lessebo” effect that lessens the actual effect of the drug. Moreover, he points to areas where he considers the evidence more robust—for instance, for a form of chronic, low-level depression called dysthymia and for the prevention of relapses—to support his overall position.

Yet, though he offers a host of objections as to why effects observed in clinical trials understate what happens in actual clinical practice, one can come up with a variety of counterpoints as to why trial results might instead overstate real-world effects (as is often the case in other fields). For instance, compliance with medication regimens may be higher in a clinical trial, given the structure and support provided: this would make the drugs look better in trials than they actually are in real life.

Kramer also argues that drug trials include individuals who are not typical of most depressed people. He calls these people “immensely disadvantaged” patients, meaning that they tend to be poor or lacking in resources. They do not have an “uncomplicated depression” and so may be unusually difficult to treat. We are left to wonder where along the economic spectrum, exactly, these “immensely disadvantaged” people actually land. Not at the top, for sure, but are they true outliers, or do they account for a substantial portion of the populace in our age of rising economic inequality? And should “uncomplicated depression” necessarily be considered the benchmark?

We are witnessing, for instance, a historic opioid epidemic. A widely reported study last year demonstrated that mortality is rising among middle-aged whites, driven in part by the fact that they are abusing alcohol and drugs and killing themselves at higher rates. Racial and ethnic minorities, meanwhile, contend with unique threats of their own to physical and mental health. It’s not clear to me that the sort of non-disadvantaged people with “uncomplicated depression” Kramer is referring to are necessarily representative of the nation’s mentally ill.

We do not live in Kuhn’s clinic: the efficacy of antidepressants should be judged based on their ability to improve the health of the complexly depressed, often previously treated, and frequently disadvantaged individuals of our day.

All things considered, we live in a different era than that of the eighteenth-century phlebotomists and the twentieth-century lobotomists. As compared to the therapies of the past, antidepressants are—at the very least—gentle. They have real side effects, to be sure, but they are reasonably safe and well tolerated by most. If they can effectively lessen the great distress inflicted by depression, they should be employed.

But here we are, more than a half-century after Kuhn’s experiments with G22355, debating whether antidepressants actually work. Where do things stand when all is said and done? It’s important to emphasize that—at the end of the day, and even if we put aside many of Kramer’s criticisms—the prominent meta-analyses, published in top medical journals, still show modest benefits for antidepressants, even when these studies account for publication bias, and even when they are assembled by antidepressant skeptics like Kirsch. Kirsch may argue that this is nothing more than the effect of an active placebo over an inactive placebo. This could be true, but it is an unproven theory. Moreover, effect sizes increase in those with more severe depression, and so there does seem to be a clearer role for the drugs in these patients.

More broadly, whatever we conclude about the efficacy of antidepressants, we should be doing more, not less, for the alleviation of depression. Unmet needs are rampant. Kramer notes that a third of those with major depression are entirely untreated. Real universal health care—inclusive of comprehensive mental health benefits like psychotherapy—remains a shamefully unrealized goal in the United States, something that we should continue to pursue notwithstanding shrill protestations from the liberal commentariat that it is unaffordable or impossible. Perhaps we should even consider new trials of old antidepressants. These would be large, publicly financed trials devoid of Pharma influence, conducted in more real-world settings, perhaps including a real “active placebo” arm, so as to more clearly understand the true impact of these drugs. Finally, more attention should be paid to the social determinants of mental health—the political and economic structural factors that contribute to depression, even if they are by no means its only cause.

It is a rare—perhaps imaginary—individual whose life has not been negatively touched, either directly or indirectly, by depression. Neither nihilism nor complacency is acceptable in the struggle against its ravages.