A big problem with elite institutions is that, for years on end, people in such places can abuse their positions by saying things that aren't true, before anyone whose opinion counts notices.
A particularly clear example of this is provided by the Harvard School of Public Health, which for many years has been pushing a phony claim with great success. The story is simple: that it's a well-established scientific fact that being "overweight"--that is, having a body mass index of between 25 and 30--is, in the words of Harvard professors Walter Willett and Meir Stampfer, "a major contributor to morbidity and mortality." This claim has been put forward over and over again by various members of the School of Public Health's faculty, with little or no qualification. According to this line of argument, there's simply no real scientific dispute about the "fact" that average-height women who weigh between 146 and 174 pounds, and average-height men who weigh between 175 and 209 pounds, are putting their lives and health at risk. Furthermore, according to Willett, such people should try to reduce their weight toward the low end of the government-approved "normal" BMI range of 18.5 to 24.9 (the low end of that range works out to 108 and 129 pounds for average-height women and men respectively).
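The weight figures above follow from the standard US-units BMI formula, BMI = 703 × weight (lb) / height (in)². A minimal sketch of the arithmetic, assuming average heights of 64 inches for women and 70 inches for men (those heights are my assumption, chosen because they closely reproduce the pound ranges quoted here):

```python
# Convert BMI thresholds into body weight in pounds using the standard
# US-units formula: BMI = 703 * weight_lb / height_in**2.

def weight_for_bmi(bmi: float, height_in: float) -> float:
    """Weight in pounds corresponding to a given BMI at a given height."""
    return bmi * height_in ** 2 / 703

# Assumed "average" heights (not stated explicitly in the article):
# 64 in for women, 70 in for men.
for label, height in [("woman, 64 in", 64), ("man, 70 in", 70)]:
    low_normal = weight_for_bmi(18.5, height)  # bottom of "normal" range
    overweight = weight_for_bmi(25, height)    # start of "overweight"
    obese = weight_for_bmi(30, height)         # start of "obese"
    print(f"{label}: 'normal' starts at {low_normal:.0f} lb; "
          f"'overweight' spans {overweight:.0f}-{obese:.0f} lb")
```

At those heights the formula yields roughly 108 and 129 pounds for the bottom of the "normal" range, and roughly 146-175 and 174-209 pounds for the "overweight" range, matching the figures in the text.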
It's difficult to exaggerate the extent to which the actual scientific evidence fails to support any of this. In fact, the current evidence suggests that what the Harvard crew is saying is not merely false, but closer to the precise opposite of the truth. For the most part, the so-called "overweight" BMI range doesn't even correlate with overall increased health risk. Indeed "overweight," so-called, often correlates with the lowest mortality rates. (This has led to much chin-scratching over the "paradox" of why "overweight" people often have better average life expectancy and overall health than "normal weight" people. The solution suggested by Occam's Razor--that these definitions make no sense--rarely occurs to those who puzzle over this conundrum.) Furthermore, it's simply not known whether high weight increases overall health risk, or is merely a marker for factors, most notably low socio-economic status, which clearly do cause ill health. As Adam Drewnowski, director of the Center for Public Health Nutrition and a professor of epidemiology and medicine at the University of Washington, told me, "nobody wants to talk about the 'C' word--class. Yet it's clear that social economic gradient is a profound confounding variable in all this, and one that most current studies do not adequately take into account." Moreover, as we shall see, the notion that so-called "overweight" people should try to become very thin, i.e., should try to move into the low end of the "normal" BMI range, is, given the actual epidemiological evidence, nothing less than bizarre.
In 2005, the Harvardistas were thrown into a panic when a study by Katherine Flegal and others appeared in the Journal of the American Medical Association. This study found 86,000 excess deaths per year in the United States among so-called "normal weight" people, when compared to so-called "overweight" persons. In other words, "overweight" people had the lowest mortality risk. The Harvard people quickly organized a press conference at which they denounced the study's results, and claimed its authors had failed to take into account smoking and preexisting disease.
But that clearly wasn't true. The JAMA study's authors explicitly stated that they had done calculations excluding smokers and controlling for preexisting disease, and that employing such exclusions in the published results would not have altered the paper's conclusions. They even published supplementary data on the Centers for Disease Control and Prevention's website showing precisely what their results looked like when they controlled for these factors. None of this has made much difference. Two and a half years later, you can read a story in the September issue of Scientific American in which Stampfer and Willett repeat the claim that the JAMA study didn't control for smoking and preexisting disease.
When the baselessness of their criticisms of the JAMA paper is brought up, the Harvard people fall back on the claim that this is just one study out of thousands, and that almost all the rest support their claims about the dangers of "overweight." This claim is equally false. Far from being unusual, the JAMA paper's results mirror the overall state of the medical literature (as the citations in the paper itself make clear). "Most studies actually have produced results closer to the data of Flegal et al.," says Glenn Gaesser, a University of Virginia professor of kinesiology. Gaesser recently undertook a survey of papers published in 2007 that reported data on the relationship between BMI and life expectancy. The vast majority--around 80%--found either no elevated mortality risk associated with "overweight," or the lowest mortality in the "overweight" range.
In particular, it's difficult to find studies in which mortality at the lower end of the "normal" range isn't quite a bit higher than at the low end of the "overweight" range (the absolute low point in the mortality curve tends to be at the border between "normal weight" and "overweight," or in the first couple of units of the "overweight" range). Thus Willett's claim that people should strive to be in the lower end of the "normal" range flies directly in the face of the actual data.
So what evidence do the Harvard people cite? Not surprisingly, their own studies--most frequently the Nurses' Health Study and the Physicians' Health Study: long-running observational studies featuring over one hundred thousand participants. But these studies also feature a number of serious problems. For one thing, sometimes they produce the "wrong" results. For example, in 2000 Willett co-authored a paper indicating that younger men with BMIs of 25-27 didn't have elevated mortality risk when compared to "normal weight" younger men, and that, among men older than 65, BMI was unrelated to mortality altogether. How does Willett deal with these inconvenient data? By citing them to support the opposite conclusion, as in this quote from a paper he co-authored earlier this year: "During the last two decades, accumulating epidemiological data have strongly suggested that overweight and obesity cause premature death." Willett's own work calls that statement into question.
Often, however, the Harvard people publish papers that, unlike most of the medical literature, find a linear relationship between increasing BMI and increasing mortality risk, once one is above the "underweight" range. The authors get these results by using a very suspicious method: they exclude from their analysis most of the deaths in their participant pool. Indeed, it's not unusual for the Harvard group to exclude as many as 85% or 90% of the deaths that occur in their studies. Richard Cooper, Chair of the Department of Preventive Medicine and Epidemiology at Loyola-Chicago Medical School, points out that this looks very much like "data dredging" or "data trimming," i.e., running your data with various extreme exclusionary criteria until you get the "right" results. Furthermore, "They have no real evidence that excluding all these participants is the right thing to do," he told me. "The Harvard people talk a lot about reverse causality [the idea, for example, that people at the low end of the normal BMI range have relatively high mortality rates because of smoking or sickness], but almost all the studies to date indicate that excluding smokers at the low end of the weight scale makes no difference." Needless to say, such criticisms cast a rather ironic light on the Harvard group's ongoing attacks on the JAMA paper's authors for not excluding enough participants from their study.
In addition, Cooper points out that the JAMA authors are using a data set--the NHANES survey--that almost all epidemiologists consider to be far superior to the Harvard participant pools. (Among other things, the NHANES study is calibrated to reflect the demographic makeup of the U.S. population and the participants' actual heights and weights are measured periodically. By contrast, the Harvard studies are based on questionnaires filled out by doctors and nurses). "The NHANES data are a thousand times more reliable than [the] Nurses Health and Physicians Health [studies]," Cooper told me.
Perhaps in part because of the power the Cambridge cabal wields through peer review, grants, and recommendations, few physicians have openly dissented from its conclusions. Not surprisingly, then, much of the criticism the Harvard crew gets comes from people in other fields: from sociologists, political scientists, senior government researchers, and yes, even a law professor or two. Predictably, this leads people like Willett and Stampfer to complain that their critics "aren't doctors." Leaving aside that some of their critics are doctors, it's unclear why the opinions of doctors regarding the interpretation of thousands of epidemiological studies should be valued more highly than those of social scientists whose professional training involves precisely this sort of meta-statistical analysis.
Of course, one reason the Harvard claims are treated with such respect is that they tell people what they want to hear. Their claims dovetail perfectly with social prejudices that declare one can never be too rich or too thin, and with the widespread desire to believe that sickness and death can be avoided if one follows the rules laid down by the appropriate authority figures. Combine these factors with the social cachet wielded by the Harvard name, a willingness to make brazen assertions that run from serious exaggerations to outright lies, and lazy journalism of the "some say the Earth is flat; others claim it's round; the truth no doubt lies somewhere in the middle" type, of which the Scientific American article is only the most recent example, and you have a recipe for an epidemic of wildly misleading statements dressed up in the guise of authoritative scientific discourse.
By Paul Campos