On June 29, 1967, Jayne Mansfield, a 34-year-old film actress and nightclub performer, was traveling to New Orleans from Mississippi in a Buick Electra when the car plowed into the back of a tractor trailer. The driver was a young man employed by the Biloxi dinner club where Mansfield had just wrapped up the 11 p.m. show. Mansfield and her boyfriend were crammed up front with him so that her three young children could sleep in back. The road was narrow and poorly lit, and investigators later concluded the driver didn’t see the truck until the very last moment. He swerved left, but it was too late. All three adults were killed instantly. The children survived with minor injuries.
Mansfield had achieved fame in the 1950s in films like The Girl Can’t Help It (1956) and Will Success Spoil Rock Hunter? (1957). Her star had faded in the 1960s, but she remained sufficiently well-known that the details of her death drew extensive press coverage. They were gruesome. The front of the Buick had slid under the tractor trailer, shearing off the roof and smashing the skulls of Mansfield, her boyfriend, and the driver. Initial news reports that Mansfield was decapitated were untrue, but Mansfield’s undertaker later said he found her body in a condition that was “as bad as you get in this business.”
Washington took notice. Since 1953, the Interstate Commerce Commission (an agency that no longer exists) had required big trucks to have underride guards—those horizontal bars you see hanging below the rear of a semi to act as a rudimentary bumper—whenever the truck’s cargo bed stood 30 inches or more above the ground. But the rule didn’t specify how strong the underride guard had to be—how much force from a rear collision it had to withstand without giving way. Quite obviously, the 1953 standard had failed to save the lives of Mansfield and the other adults in that Buick Electra. So less than four months after the accident, President Lyndon Johnson’s Federal Highway Administration, or FHWA, announced plans to impose stricter performance requirements on underride guards. The FHWA’s advance notice of proposed rulemaking was published on October 14, 1967. The regulation itself was published on January 24, 1996.
No, that isn’t a typo. The federal government took 28 years and three months to get its underride guard rule out the door. By then, nearly 9,000 more people had died the same way Mansfield had, by sliding under a big semi.
Why was three decades’ deliberation necessary to impose such a commonsense safety precaution? Because Mansfield met her fate just as the economics profession was advancing, like an occupying army, into noneconomic agencies of the federal government. The result was a mindset—an ideology, really—that dominates public policymaking to this day. The Marxists (of whom I am not one) have an excellent term for this ideology: Economism. At a time of extreme political polarization, an Economicist bias (pronounced eh-co-nom-i-sist) is practically the only belief that Democrats and Republicans share.
The Economicist ascendancy helped identify priorities and impose order on the expanding postwar federal government, injecting a welcome dose of rigor. But it also encouraged a retreat from activist government, one that started to firm up under President Jimmy Carter and then snowballed under President Ronald Reagan. The Economism of Republican administrations, influenced by “freshwater” economists like the University of Chicago’s Milton Friedman, was much more conservative than the Economism of Democratic administrations, influenced by “saltwater” economists like the Brookings Institution’s Charles Schultze. But both schools were conservative in their preference for market solutions, their bias against “command and control” regulation, and their distrust of the sort of bold government experimentation that characterized the New Deal. After the 2008 housing crash, Economism lost much of its luster in the academic world, and under President Joe Biden we may be seeing tentative signs (in, for example, this year’s Inflation Reduction Act) that congressional Democrats feel less beholden to Economicist dogma. But to make a clean break, Democrats need to understand how Economism conquered Washington, and how the reduction of noneconomic policy choices to mathematical models and formulas wreaked havoc on many efforts to address the country’s most urgent problems.
Economists first flocked to Washington in large numbers in the 1930s to help President Franklin Roosevelt manage the Great Depression and implement the New Deal. By 1938, the federal government employed about 5,000 economists, most of them in Washington, according to a 1985 lecture (“The Washington Economics Industry”) delivered by the late economist Herbert Stein. Stein, chairman of the Council of Economic Advisers under Presidents Richard Nixon and Gerald Ford, had been one of those 5,000. Forty-two years later, Stein reported, the number of economists in the federal government had more than tripled, to 16,000.
Stein identified the watershed as World War II, when economists branched out into defense policy. “It has been said that the last war was the chemist’s war and that this one is the physicist’s,” Paul Samuelson observed in The New Republic in 1944. “It might equally be said that this is an economist’s war.” Samuelson himself worked on complex engineering problems arising from the use on warships of a new technology called radar. John Kenneth Galbraith worked on the United States Strategic Bombing Survey, which concluded that aerial bombardment was much less effective than the military had supposed. Friedman worked (unsuccessfully) to develop an alloy for jet engines that wouldn’t melt at high temperatures. Out west in Santa Monica, the RAND Corporation, created by the Air Force immediately after the war, assigned the question of how to fight future wars, including possible nuclear war, not to military experts with battlefield experience, but rather to economists ratiocinating by the Pacific.
Economics became, Stein recalled, Washington’s lingua franca. “It was almost,” he said,
as if someone had suddenly decreed that the language of the government would be Latin. There would be a great demand for people who could speak Latin. So there was a great demand in Washington for people who could speak economics. There was also a large supply of them, who had come for the war and didn’t want to go home again.
By 1967, the Economicist army had established a beachhead at the newly created Department of Transportation, where the FHWA and, later, the National Highway Traffic Safety Administration, or NHTSA, resided. As these agencies developed various iterations of a regulation mandating strengthened underride guards (informally dubbed “Mansfield bars” in honor of Jayne), a succession of presidential administrations subjected them to analyses that weighed the cost to the trucking industry against the benefit in human lives saved. How much was each human life worth? When the Nixon administration addressed this question in 1971, human life was assigned no value at all—wasn’t that what “priceless” meant?—so of course the Mansfield bar flunked. It was, according to one of the principals, the first decision NHTSA ever made using cost-benefit analysis. Today, all major regulations in all departments of government must pass a cost-benefit test.
The Ford administration considered Mansfield bars again in 1974. This time, a human life was valued at $200,000, based mostly on the lost-income model favored by life insurers, and the rule flunked again. The Clinton administration weighed Mansfield bars a final time in 1996. By now, a human life was valued at around $3 million, based mostly on pay differentials between risky and non-risky manual labor and on surveys asking how much people would pay to avoid risk of death. This technique, still in use, was derived from an influential 1968 essay (“The Life You Save May Be Your Own”) by the Harvard game theorist Thomas Schelling. Three million bucks did the trick, and the regulation was finally promulgated. Today, the federal government values a human life at about $10 million.
Let me state before proceeding further that I harbor no ill will toward the economics profession—some of my best friends, etc.—nor toward mainstream economics as practiced, say, at the Congressional Budget Office (founded in 1974) or the Federal Reserve. In addressing matters of economic policy, it’s the best we have until something better comes along. I’ll even concede that Economism has notched some significant victories over the years. For example, the Environmental Protection Agency’s cap-and-trade program to reduce acid rain emissions through the buying and selling of pollution “allowances” is a proven success. When economic insights and/or market solutions achieve the desired policy result, I applaud them.
But Economism isn’t merely a governing tool; it’s become just about the only governing tool. For half a century, economists have had their finger in every conceivable pie. It was economists, led by Friedman and by Walter Oi of the University of Washington, who showed President Richard Nixon how he could fulfill his 1968 campaign promise to end the Vietnam draft, quieting student protests but a generation later rendering feasible a 20-year military stalemate in Afghanistan. It was economists, led by Alfred E. Kahn of Cornell, who persuaded President Jimmy Carter to deregulate the airline industry, reducing prices but also turning legroom into a commodity for which wealthier passengers pay extra. It was economists, like George Stigler of the University of Chicago, whose work led jurists Richard Posner of the 7th Circuit, Robert Bork of the D.C. Circuit, and Lewis Powell of the Supreme Court to eviscerate antitrust enforcement, enthroning the consumer at great cost to the laborer. “I really don’t know one plane from the other,” Kahn cheerfully confessed. “To me they are all marginal costs with wings.” He was chairman of the Civil Aeronautics Board at the time.
Economists have thrown their weight around throughout the Covid pandemic. Early on, Johns Hopkins medical school issued a call for survey data on how far the disease had spread and how often it led to serious illness or death—signed by two economists and only one epidemiologist. The press release reassured the public that “economists and epidemiologists” were “not at odds, but in agreement about this.” Think about that. Hopkins medical school is consistently rated one of the top five in the country. Yet even there, an epidemiologist dared not make an uncontroversial public health pronouncement in the midst of a pandemic without invoking the unimpeachable authority of two economists.
President Donald Trump went out of his way to denigrate all manner of experts, but even he deferred to his economic experts. In April 2020, a New York Times story reported that the “highest-level alert known to have circulated in the West Wing” early that year about Covid’s dangers came from Trump’s trade adviser, Peter Navarro. ProPublica later reported that in March 2020 Navarro mucked around in the awarding of government contracts for respirators and other medical supplies. Navarro also pressured the Food and Drug Administration to reinstate emergency use authorization for hydroxychloroquine, a potentially toxic malaria drug that has since repeatedly been shown to be useless in treating Covid. “I always considered Navarro’s role in warning of Covid’s danger as an outgrowth of his anti-Chinese views,” Bob Davis, who covered trade for The Wall Street Journal, told me. “Remember, he wrote a book called Death by China…. So anything coming out of China, including a bug, would be a big danger to him.”
What made Navarro, a Ph.D. in economics, think himself qualified to make public health judgments? In his 2021 book, In Trump Time: A Journal of America’s Plague Year, Navarro cited his youthful Peace Corps experience in Thailand; his certification five decades ago as a biomedical equipment technician; and the Matt Damon movie The Martian, which taught him that “in the face of mortal danger, it doesn’t matter one whit what your expertise is.”
In academia, the Economicist ascendancy reached its zenith in 1992, when University of Chicago economist Gary Becker won the Nobel for applying the principles of economics to (among other topics) criminal justice, marriage, and racial discrimination. Economism conquered popular culture in 2005 with the publication of Freakonomics by Steven D. Levitt, yet another University of Chicago economist, and Stephen J. Dubner, a freelance writer who’d previously profiled Levitt in The New York Times. Freakonomics was a sort of Economicist Manifesto for the masses. Market reasoning, the authors argued, could explain, well, anything: why people cheated at games, why crime decreased, why prostitutes outearned architects, what made children perform better in school, why Black viewers didn’t watch Seinfeld. Morality, the authors explained, “represents the way that people would like the world to work—whereas economics represents how it actually does work.” The arrogance was magnificent. Readers couldn’t get enough. Freakonomics sold four million copies and spawned three sequels and a podcast.
The Great Recession of 2007–2009 began an anti-Economism backlash that’s gathered strength over time. The critics include (to cite a partial list) mathematician David Orrell (Economyths, 2010); Harvard political scientist Michael Sandel (What Money Can’t Buy, 2012); British activists/economists Joe Earle, Cahal Moran, and Zach Ward-Perkins (The Econocracy, 2016); New York Times editorial writer Binyamin Appelbaum (The Economist’s Hour, 2019); John Maynard Keynes biographer Robert Skidelsky (What’s Wrong With Economics?, 2020); and, most recently, University of Michigan sociologist Elizabeth Popp Berman (Thinking Like an Economist, 2022). The issue has been discussed at length in academic and popular publications and in venues that previously exalted Economism. “The categories and the vocabulary of the market,” Leon Wieseltier, editor of Liberties journal, said at the Aspen Ideas Festival in 2014, “are being used in realms where they do not belong.”
As Berman’s title indicates, the anti-Economicist critique extends well beyond the failures of neoclassical economics, as practiced in Washington leading up to the 2008 housing bust, to question the legitimacy of the entire economics discipline. It’s not unlike what people used to say about the legal profession before computers created a lawyer glut. Remember those Shakespeare-quoting T-shirts that said, THE FIRST THING WE DO, LET’S KILL ALL THE LAWYERS? Now the people everybody wants drawn and quartered are economists, because they’re fed up with Economism.
The anti-Economicist urtext was published in 1944 by Karl Polanyi, an émigré Austrian socialist then teaching at Bennington. The Great Transformation blamed World War II on the “utopian” delusion that society’s needs could be subordinated to those of the market economy. Rejecting both the Marxian notion that class conflict propelled social progress and the classical liberal belief that market forces were rooted in nature, Polanyi said economics was created by society and must be made to serve its needs. Yet society insisted on believing the opposite, that “the individual should respect economic law even if it happened to destroy him…. Nothing obscures our social vision as effectively as the economistic prejudice.”
The post-2009 wave of anti-Economicist literature extends Polanyi’s theme, identifying the profession’s various blind spots even as the federal government continues, indiscriminately, to apply economic reasoning to noneconomic policies. Here are five of the stronger anti-Economicist criticisms:
Economists overvalue modeling. On a 2008 visit to the London School of Economics, the late Queen Elizabeth flummoxed her hosts by asking why no one had seen in advance the financial crisis then well underway. The answer was that economic forecasting just isn’t very reliable. The best a 1984 Harvard Business Review article could say in its favor was that it was better than nothing. Since then, economic modeling has gotten only marginally better at seeing into the future, despite the advent of higher-powered computers able to process perhaps 20,000 times as many variables. People sometimes compare economic forecasts to weather forecasts, but that’s out of date, because computers have made weather forecasting much better; where once your weatherman could see only two or three days in advance, now he can see one or two weeks in advance. Hurricanes no longer surprise us. Financial crises still do.
The difficulty is that, while the variables in weather patterns, however numerous, are inanimate, the variables in economic patterns are human beings with minds incorrigibly of their own. In 1991, Larry Summers, then chief economist at the World Bank, said that “the laws of economics, it’s often forgotten, are like the laws of engineering. There’s only one set of laws and they work everywhere.” Summers’s youthful arrogance becomes even more cringeworthy when you learn he was evangelizing about opening the former Soviet Union to capitalism—an excellent idea pursued in such breezily unstructured and corrupt fashion that it beggared Russia’s population, created a Russian mafia, elevated Vladimir Putin to the presidency, and returned our new friend, Russia, to the status of geopolitical adversary. Summers’s battle cry to transform Russia from Communist state to what turned out to be a semi-fascist and not terribly prosperous kleptocracy is an excellent example of what Thomas Piketty, in his 2014 book, Capital in the Twenty-First Century, judges the economics profession’s “immoderate use of mathematical models.” Even if you accept, as classical economics commands, that Economic Man is supremely rational, rationality isn’t the same as simplicity. Answering Queen Elizabeth’s question in 2009, the New York Times columnist and Nobel economist Paul Krugman corrected her nineteenth-century countryman John Keats: “The economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.”
Economists undervalue data. How does economic analysis differ from other kinds? The Keynes biographer Skidelsky quotes a piquant observation from the economist Ely Devons (1913–1967). “If economists wished to study the horse,” Devons said, “they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, ‘What would I do if I were a horse?’” The inability of the economics profession “to validate its most important hypotheses empirically,” Skidelsky writes, “means that it has a strong tendency to slide into ideology.”
This vice is comparatively recent, according to Berman. The economists who poured into Washington during the 1930s, she writes, were practitioners of “institutional economics,” which was much more interested in history and the dynamics of change than in mathematical theory, and also much more liberal. Thorstein Veblen, the economist who coined the phrase “conspicuous consumption,” was a founder of institutional economics, and a Veblen student, Isador Lubin, ran the Bureau of Labor Statistics under President Franklin Roosevelt. By the 1940s, though, the institutionalists’ academic prestige was dwindling, and within a decade, institutional economics was a dead letter, displaced first by Keynesianism, which was more mathematical, and later by neoclassical economics, which was overwhelmingly mathematical.
“If the theory conflicted with the empirics,” Berkeley economist Jesse Rothstein tells TNR editor Michael Tomasky in Tomasky’s new book, The Middle Out, “you would ignore the empirics and focus on what the theory said.” Rothstein says there was a practical reason for that: “The empirics weren’t very good. We didn’t have much data.” Data was, Stanford economist David M. Kreps recalled in a 1997 essay, “hard to gather and relatively expensive to process.” That started to change in the 1990s, Rothstein explains, when better data, better computers, and better empirical methods became available. One result was a 1994 paper on the minimum wage by Berkeley economist David Card and the late economist Alan Krueger of Princeton that challenged what The New York Times had called “a virtual consensus among economists” that minimum-wage hikes always reduced employment. Card and Krueger examined comparative data from Pennsylvania and New Jersey and concluded that a minimum-wage hike had not reduced employment. Alas, congressional Republicans, along with a few conservative Democrats, don’t want to hear it. Clinging to economists’ previous, theory-based model, they have for 15 years refused to raise the federal hourly minimum wage above $7.25—nearly $5 lower, after inflation, than what Martin Luther King protested against in August 1963 at the same Washington march that culminated with King’s “I Have a Dream” speech.
Economists don’t get societies. “There is no such thing as society,” the late British Prime Minister Margaret Thatcher famously said in 1987, and (quite rightly) she caught hell for it. But economics—not just the Austrian school of Thatcher’s beloved Friedrich Hayek—is premised on the choices of individuals, not societies, and therefore tends to miss how individuals within a society affect one another. “According to the atomic theory of economics,” writes the mathematician Orrell, “individual people or businesses are supposed to be independent of one another, so they are uninfluenced by each other’s decisions…. But in fact every decision we make is affected by what is going on around us—and not just during financial booms and busts.”
The Economicist presumption that individuals operate in isolation lies at the heart of the problem known as the Tragedy of the Commons, named and popularized in a famous 1968 essay by Garrett Hardin, a biologist at the University of California, Santa Barbara. If farmers shared a common, the theory went, it would get overgrazed, because each farmer would value only his own personal interest in maximizing its use. Ironically, it took an economist, Indiana University’s Elinor Ostrom, to note the Economicist bias in biologist Hardin’s theory. In the nontheoretical world, Ostrom pointed out, societies had shared resources for hundreds of years, because people weren’t half so collectively dumb as Hardin supposed; they understood their livelihoods depended on preserving common resources and, under the proper conditions, would temper immediate private gain to protect public property shared by all. Ostrom was able to identify multiple examples—data!—including a Swiss Alpine village called Törbel that very sensibly had managed cows and goats grazing on land held in common all the way back to 1483. Ostrom won a Nobel for this work in 2009, 17 years after Becker.
Economists don’t get irrationality. The relatively new field of behavioral economics is premised on an idea that’s been staring economists in the face since Adam Smith wandered into a pin factory: People are frequently irrational. Skidelsky points out that the first Nobel for work in behavioral economics, in 2002, had to go to a psychologist, Daniel Kahneman, “because the standard behavioral assumptions of economists have been so thoroughly unrealistic.” Among the examples of common irrational behavior that Kahneman wrote about with his collaborator, Amos Tversky, who died in 1996, are loss aversion (we dislike losing even those things we don’t especially value), confirmation bias (we notice what confirms that which we believe and ignore whatever contradicts it), and the sunk-cost fallacy (we keep fighting unwinnable wars, as in Afghanistan, because we don’t want our past war dead to have died in vain).
Often what an economist would call “irrationality” is actually an ethical belief system in conflict with market forces. Sandel, the Harvard political scientist, builds his book around economically rational options that violate ethical norms: A California prison where inmates can get better cells if they pay $82 per night; concierge doctors who sell the right to call their cell phones; the sale by older persons of their life insurance policies; and so on. The rise of Economism, Sandel notes, has altered some ethical norms by systematically legitimizing social inequalities. For example, Sandel’s book was published less than a decade ago, yet already his shock at the existence of concierge medicine seems a little dated. We’ve gotten used to it.
Appelbaum, the New York Times editorialist, describes a classroom experiment in which a professor gives half his students $10 and the other half nothing. Students who receive cash are instructed to give some to a student who got none; the amount is up to the giver, and the receiver may accept or refuse the offer. What tends to happen is that if the giver offers less than about $3, the receiver refuses it as insultingly small; never mind that accepting even a pittance yields a net financial gain.
Economists don’t get people who aren’t economists. A central complaint voiced by the British activists Earle, Moran, and Ward-Perkins is that economists make insufficient effort to communicate with the public. Economists, they write, claim to be concerned with mere technical issues when in fact they engage questions vital to democracy that economic jargon renders incomprehensible to most people. “This reality,” they write, “makes econocracy incompatible with one of our greatest political traditions, liberal democracy.” The authors are young Ph.D.s in economics who are part of an international network, called Rethinking Economics, that’s dedicated to making economics more accessible, more diverse by race and gender, and more directly engaged with questions of social justice.
As a group, economists are susceptible to certain prejudices. A 1993 study led by Cornell’s Robert H. Frank showed that economists are likelier than the public at large to behave like Economic Man, pursuing self-interest at the expense of cooperation. A more recent study, in 2011, by two University of Washington economists, found that economists tend to be tightwads, at least when it comes to making charitable donations. The economics profession is still very male; less than one-quarter of tenure-track university positions are held by women. And it’s still very white. A 2022 study by the Peterson Institute for International Economics found economics to be the single least socioeconomically and racially diverse of any major Ph.D. discipline. Even mathematicians were more diverse.
It stands to reason that if economists inhabit an insular culture, any influence they extend over government policy will be somewhat blinkered. In the same New Republic article where Paul Samuelson declared World War II “an economist’s war,” he leveled some sharp criticism at the complacency of Washington economists:
The Washington economist lives in a world frequented by his own kind. I know, for I have lived in that happy world. When he hears that someone else has arrived at the same optimistic estimate as his own, he takes this to be independent corroboration of the truth of his view instead of realizing that it is simply the reflection of his own last week’s expression of his own opinion. This process of mutual infection and amplification of opinion is cumulative and self-aggravating, so no wonder his conviction grows without bound.
This was published in September 1944. A year later, the echo chamber Samuelson described would begin its postwar advance into the rest of government. Economics remains, as Herbert Stein described it four decades ago, Washington’s lingua franca. Now Washington must become multilingual—to learn to examine noneconomic problems from other points of view.
Many people propose as a solution reforming the economics discipline itself. Earlier this year, the Hewlett Foundation and the Omidyar Network announced $40 million in grants to “multi-disciplinary academic centers dedicated to reimagining the relationships among markets, governments, and people.” More grants are on the way from the Ford Foundation and Open Society Foundations. I wish these efforts well; neoclassical economics could use a good kick in the pants. But where the problem is Economism’s intrusion into noneconomic areas of governance, I’m skeptical that newer and better varieties of intrusion can help. The cure for Economism isn’t more economics, but less. Economics needs to back the hell off. It can remain a tool—I hope I’ve made clear that economics can be a very useful tool—but it can no longer remain the only tool, or even necessarily the primary one. Three areas of noneconomic government policy come immediately to mind where Economism holds too much sway and must be made to retreat.
Privatization is premised on the Economicist notion that markets are more efficient than government bureaucracy at providing services. That isn’t always wrong. But the efficiencies come at the expense of democratic accountability. Nowhere is this problem more glaring than in the administration of justice. “Far too frequently,” explained a 2020 report by the American Bar Association, “government authorities allow private companies to operate in the criminal justice system with little or no oversight.”
Starting in the 1970s, the courts began imposing user fees on defendants to defray the rising cost of administering criminal justice, which, like rising crime itself, had voters up in arms. Courts had long charged criminal defendants bail to reduce the risk of flight; state and local jurisdictions now added to the tab the cost of pretrial supervision, drug testing, electronic monitoring, and so on. But because providing these pretrial services was a nuisance, many courts offloaded these functions to private companies. The defendant, instead of paying a user fee to the county, pays the user fee to a private contractor. That puts the contractor, who operates on a for-profit basis, in an enviable bargaining position: Pay me outrageous fee x or decline pretrial supervision and/or an electronic bracelet, the contractor can say to the accused, and sit in jail while you await trial. The fee just for pretrial supervision can be as high as $300 per month. Depending on the jurisdiction, it may or may not be reimbursed if the defendant is found innocent.
If the defendant is found guilty, he may be one of the roughly 100,000 prisoners assigned to a private prison. President Joe Biden issued an executive order last year not to renew contracts with private prisons, but that didn’t cover the detention of about 27,000 undocumented aliens held in private prisons by U.S. Immigration and Customs Enforcement, or the 71,944 prisoners held in private prisons by the states, according to numbers for the year 2020 compiled in August by the Sentencing Project, a nonprofit. Twenty-six states use private prisons; in Montana, they account for fully one-half of all prison inmates.
Do private prisons save taxpayers money? Sometimes. But ask yourself how. Typically, they achieve savings by running their facilities on the cheap. Why shouldn’t they? Nobody’s watching. According to a 2016 report by the Justice Department’s inspector general, the Bureau of Prisons did a poor job monitoring the private prisons it had under contract to make sure they met BOP standards. Comparing 14 contract prisons with 14 BOP-run prisons, it found the contract prisons had more assaults, more inmate grievances, more lockdowns, and nearly twice as many guns and other weapons confiscated from prisoners. With privatization, wrote the late economist Frank Ackerman and Georgetown law professor Lisa Heinzerling in their 2004 book, Priceless: On Knowing the Price of Everything and the Value of Nothing, “anything profitable that is not prohibited by law is likely to occur” (italics theirs).
The story of U.S. health care over the last century is a succession of ever-more ambitious experiments to see whether medical services can be made broadly available on a for-profit basis. Every experiment failed, always for the same reason: The market doesn’t want society to share equally in paying the cost of health care. It wants the biggest consumers (i.e., the sickest people) to pay more. A lot more. That’s how markets work.
As the journalist Jonathan Cohn explains in his books Sick (2007) and The Ten Year War (2021), it was in the 1920s that health care first became sufficiently sophisticated and modern as to become unaffordable.
That was market failure number one.
To fix this, Baylor hospital in Dallas established, in 1929, the first health insurance plan, which caught on at other hospitals and eventually became Blue Cross. The Blues were nonprofit, because these were hospitals, for God’s sake, not Macy’s. That was the prevailing attitude. Everybody got charged the same premium, regardless of age, sex, or preexisting conditions, and the premiums were low.
The Chinese wall between health insurance and Macy's didn't last. Private (that is, for-profit) insurance companies moved into the market en masse during World War II after the federal government exempted the cost of health insurance benefits from wartime wage ceilings. Since businesses couldn't compete for talent by offering higher wages, they competed by offering health insurance plans, increasing demand for those plans. Private insurance companies met that demand with supply. Over time, these private companies applied their expertise in calculating relative risk, charging riskier customers higher premiums and avoiding very risky customers altogether. That allowed them to collect lower premiums from the great majority of their customers, who were perfectly healthy. To compete, the Blues had to follow suit, and eventually they gave up their nonprofit status. As health care got even more sophisticated and modern, it got even more expensive. Medical premiums went up, and health insurers got more aggressive about pushing costs back onto patients.
That was market failure number two.
To fix that, starting in the 1970s, Health Maintenance Organizations, or HMOs, began to proliferate. These were a lot like the earlier Baylor hospital model: a single health insurance policy you could use at a single hospital. Like the Blues, HMOs were initially nonprofit. Like the Blues, HMOs eventually became for-profit. Rising health care costs eventually made even premiums for HMOs too expensive.
That was market failure number three.
Now it was the 1990s, and health care professionals had given up all pretense that medicine was a branch of philanthropy. The late Arnold Relman, longtime editor of The New England Journal of Medicine, assigned much of the blame to the Supreme Court, which in Goldfarb v. Virginia State Bar (1975) barred professional associations from engaging in “anticompetitive conduct.” As a result, the American Medical Association could no longer maintain professional rules against doctors advertising their services or owning a financial interest in any drugs or machinery or labs. A follow-up 1982 Supreme Court decision in Arizona v. Maricopa County Medical Society barred professional societies from imposing any ceiling on medical fees. Doctors could now become entrepreneurs, and they did. Cities and counties started closing public hospitals, and, in the aughts, for-profit chains started gobbling up nonprofit hospitals. (More recently, private equity firms have started gobbling up nursing homes.)
The nonprofit sector was done rescuing U.S. medicine from the profit motive, so this time it fell to the government. Hillary Clinton tried as first lady to create a national health care plan. Congress said no. Sixteen years later, President Barack Obama established a national system of government-subsidized health insurance exchanges that Republicans portrayed as socialistic, but which were in fact a market scheme dreamed up by Stuart Butler of the Heritage Foundation and adopted by Massachusetts Governor Mitt Romney (who subsequently spent much of his 2012 presidential campaign against Obama denying the obvious similarities).
After passage of Obamacare in 2010, Republicans tried, repeatedly and unsuccessfully, to kill it. They had better luck maiming it by repealing the “individual mandate” that required everyone to buy health insurance and by choking off tax revenue to fund Obamacare. But the bigger long-term problem is that, as medical costs continue to rise, Obamacare is going to require bigger taxpayer subsidies. At some point, the federal government will have to say Enough! and seize control of medical pricing. At that point, we’ll get some version of Medicare for All.
It has been demonstrated again and again that private insurers can't restrain health costs anywhere near so effectively as the government can. A 2020 review by the Kaiser Family Foundation of 19 studies, for example, found that private insurers pay nearly twice as much as Medicare in hospital fees, and more than twice as much in physician fees. Most mainstream economists acknowledge that market forces work poorly in health care, a point powerfully argued by Stanford economist Kenneth Arrow as far back as 1963. But faith in Economism dies hard in politicians, especially Republican ones, and in a voting public that's been taught for decades to believe the market will always outperform the government.
In January 2019, a group of very distinguished economists and economic policymakers published a joint statement in The Wall Street Journal calling a carbon tax "the most cost-effective lever to reduce carbon emissions at the scale and speed that is necessary." The tax, they wrote, should rise every year until reduction goals were met. "A consistently rising carbon price will encourage technological innovation," they wrote, "and large-scale infrastructure development." This all made excellent sense to me at the time, and until recently I judged a person's seriousness about addressing climate change according to that person's willingness to support a carbon tax. Who was I to question the wisdom of the 28 Nobel laureates in economics who signed the letter?
One Nobel laureate in economics who did not sign the letter was Paul Krugman. Three months after the letter was published, Krugman argued on Twitter that it was “dubious economics” to rely too much on emissions taxes to address climate change, because “you also need public investments in infrastructure and new technology. The huge recent cost reductions in renewables didn’t just happen—they were partly the result of Obama-era investments.” State environmental regulators have mostly given up on Economism. Dana Stein, a member of Maryland’s House of Delegates and vice chair of its environment and transportation committee, told me that years ago he’d go to legislative conferences and hear economists discuss the carbon tax at length, but that doesn’t happen much anymore. “The focus is much more on regulation, not market-based approaches.”
In the Inflation Reduction Act, or IRA, which spends $369 billion to address climate change through tax breaks and grants to green technologies, the Biden administration and Congress mostly reject Economism, in this sense: It is the most ambitious government investment ever made to address climate change, and it doesn’t include a carbon tax.
I still think a carbon tax is a good idea. But getting a carbon tax of any kind through Congress has proved nearly impossible, despite occasional flickers of bipartisan support. Getting a carbon tax priced high enough to make a significant difference seems an even more remote possibility. As Lydia DePillis noted in a thoughtful New York Times article published in August, Yale economist William Nordhaus, the leading proponent of the carbon tax, suggested in 2018 a tax possibly as low as $43 per ton of carbon emissions, which a lot of people think seriously underestimates the damage climate change causes. Nordhaus's model also assigns less value to future harm than to current harm, a common flaw in economic modeling. Nordhaus cited the $43 figure in his Nobel speech—yes, Nordhaus has a Nobel prize in economics, too—and, to be fair, he also cited an upper bound of $108. But in Sweden, where Nordhaus gave his Nobel lecture, they already tax carbon at $126 per ton.
As the Loire and the Rhine and the Yangtze and the Colorado rivers all dry up, perhaps it's time to quit trying to pinpoint the precise financial cost of climate change. Knowing that number won't make the floods and the hurricanes and the droughts stop. Instead, we should focus on developing and deploying, as fast as we can, technologies to reduce climate change, cost be damned. That's what we do in wartime, and this is a war against global catastrophe. As Krugman noted, previous government investments have already yielded enormous benefits. The IRA will yield more. With the IRA, the government is taking the problem away from economists and handing it to climate scientists and engineers. Why didn't we think of that before?
Because Economism is out of control. Those Mansfield bars? In 2015, NHTSA proposed a regulation requiring that underride guards meet a higher standard of strength and energy absorption, because, on average, more than 200 people die every year the same way Jayne Mansfield did more than half a century ago. Still. The final rule came out this past July, but only after New York Democratic Senator Kirsten Gillibrand inserted into the infrastructure bill language telling NHTSA to get off the dime. It's progress of a sort to wait seven years for a safety regulation instead of 28 years, and 1,400 deaths is fewer than 9,000. But when the new regulation was finally published, Joan Claybrook, who was NHTSA administrator in the 1970s, said it was wholly inadequate—"an affront to the families of underride victims." Other safety advocates seemed to agree. Why wasn't the Mansfield bar rule stronger? Because, the economists tell us, a human life is worth only so much.