
Risk Is the Most Important Issue in American Politics


Just how Americans are supposed to face the personal risks that life inevitably entails has become the dominant political issue of our time. The chief domestic policy issue in last fall’s presidential election was medical risk: whether to require all citizens to carry health-care insurance; and if so, who should provide it; and above all, how to pay for it. The risks inherent in old age and the anxieties about Social Security and Medicare, the government’s two most important programs for helping citizens to manage them, are sure to drive the debate in at least the next few elections. In time, the transfer of private retirement risk from our large corporations to their individual employees, achieved by the almost universal shift from traditional defined-benefit pensions to 401(k) plans and other defined-contribution vehicles, will also take center stage.

Like people everywhere, Americans have always faced the risks to which daily life exposes them. And as in industrial and post-industrial societies elsewhere, the substance of these risks has changed over time, not just with advances in medical technology (citizens of high-income countries now take for granted recovery from diseases from which their grandparents were resigned to die) but with the evolving economy, too. The risk to income and livelihood is different when the majority of the workforce is employed by large firms than it was when most people earned their living from family farms. It also differs when a large and growing share of a country’s jobs is subject to the forces of global competition.

But American attitudes toward risk and risk-taking have been different from those in other countries. As Tocqueville observed nearly two centuries ago, taking on personal risk in America has always been bound up with fundamental notions of freedom and independence. This deep ideological connection has resonated well, at least in principle, with many of the major transitions in our country’s past: westward expansion, initially led by pioneers settling new farmland; the emancipation of the South’s nearly four million slaves; the early years of American industrialization, initially spurred by the small-scale efforts of individual inventors and entrepreneurs; and the more recent reprise of that experience in the explosion of Silicon Valley electronics.

Other aspects of the American experience, such as the emergence of mass production carried out by large-scale corporations, have been more problematic from this perspective. But mass employment is hardly limited to the now-vanishing industrial sector. Today more than one million people work at Wal-Mart, and half of the country’s private-sector labor force works for firms with more than five hundred employees. The recent financial crisis has thrown up troublesome new questions about economic risk-bearing, including the extent to which the banks that the government saved have now, for practical purposes, evolved into public utilities.

In Against the Gods, one of the classics of financial history written for a broad audience, the late Peter Bernstein recounted the conceptual origins and early implementation of Western ideas of risk-sharing. The action in Bernstein’s story began in earnest in the seventeenth century in Europe, where mathematicians such as Pascal and Fermat, followed soon after by Bernoulli, first conceived what we today regard as modern probability theory. They did so in the first instance as an intellectual endeavor, but once the relevant concepts were in place, insurance—also in the modern sense—became possible. By the late seventeenth century, Edward Lloyd’s coffeehouse in London had emerged as a central exchange for merchants insuring commercial seagoing ventures.

As Bernstein’s title suggested, not everyone welcomed this innovation. Bearing life’s risks was supposed to be part of the human condition, and so any scheme for evading them was a subversion of what God—in earlier tradition, the gods—intended for man. (Later on, with rapid advances in agricultural productivity, the movement of most people’s work away from the farm sector raised similar objections: an economy based mostly on non-agricultural employment was a human attempt, presumably doomed, to escape the curse that God had laid on Adam and Eve and their descendants to eat bread “in the sweat of thy face.”) The notion may seem strange today, perhaps even quaint, but the principle of accepting risk as an inevitable part of human existence also resonates with the idea that intrigued Tocqueville: of living with personal risk as an element of independent personhood.

In his fine new book, Jonathan Levy takes up the American part of this story. He not only tells a historical tale but also usefully illuminates the background to two first-magnitude issues confronting the American economy and American economic policy today: to what extent should the government, as opposed to private insurers, absorb the financial risks that we choose not to bear individually? (This is what the “entitlement” debate is really about.) And within the private sector, including not just private insurance companies but also for-profit firms in other industries, how do the rewards that firms’ owners and senior managers receive relate to the risks that they are personally bearing? (This is what the “top 1 percent” debate is really about.) Anyone concerned with these important questions can usefully—and enjoyably—learn from Levy’s fine-grained yet sweeping account.


Contractual insurance, as offered by Lloyd's of London and others, first became commonplace in shipping.

Levy’s story of risk-sharing in America begins in colonial times, with maritime insurance. Spreading economic risk informally, within families and villages, dates to time immemorial,1 but as the early emergence of Lloyd’s in London suggests, doing so on a formal contractual basis first became commonplace in shipping. Commercial maritime insurance arrived in Britain’s American colonies in 1720, and fire insurance followed soon thereafter.2 “Accident insurance” for workers emerged in the 1840s, partly in response to the growth of the railroads. Life insurance also developed mostly in the nineteenth century: in 1825, there were fewer than a hundred individual policies outstanding in the United States, but by 1870, one-third of all adult males in the northeast of the country had them. Insurance against sickness and disability came in the twentieth century. In the early experience that Levy relates, it was private companies that provided all these forms of coverage. Unlike in Bismarck’s Germany, the government in America took on the role of providing such insurance only fairly recently.

Two themes, clearly in tension with one another, run through the whole of this history as Levy tells it. One is the development in America of, as he puts it, “a vision of freedom that linked the liberal ideal of self-ownership to the personal assumption of ‘risk’”—a “new model of liberal economic personhood in which the moral logic of risk was central.” Yet at the same time, Americans also continually sought ways to off-load risk, and all the more so as the evolving economy changed the nature of the risks that they faced. They therefore had to create “a new vision of what it meant to be a free and secure actor,” one that carved out some forms of risk-sharing as compatible with independence and freedom, at least under modern circumstances. As Levy emphasizes, “from the start the identification of running one’s own risk with personhood and freedom went hand in glove with new efforts to financially manage that very same risk.”

And they had to create institutional vehicles for off-loading risk without compromising the independence to which they aspired. Private insurance companies proved adequate for some risks—maritime losses; untimely death; later on, lost wages due to sickness or disability—although even for a risk as individualized as death, Levy interestingly illuminates the competition, largely in the nineteenth century, between insurance companies and fraternal associations. Life-insurance companies sold formal policies, with regular premium payments set actuarially and due as a matter of contract; fraternal associations paid death benefits to members’ families from assessments levied on surviving members. In the end, the insurance companies won out. But the consequences were more than just a matter of the form of organization through which individuals would share mortality risk. As Levy explains, owing to the contractual nature of level-premium whole-life policies, the insurance companies accumulated reserves that they then needed to invest. Fraternal organizations, levying assessments only as required, did not. The victory of the contractual life-insurance form therefore created a major source of investment capital to finance the rapidly developing American economy.

Other risks, because they were less individualized, were unsuited to absorption by insurance companies. As American agriculture increasingly shifted from subsistence farming to the production of cash crops (and often only one crop, such as wheat, not just on an individual farm but throughout an entire region), farmers found themselves exposed in a more threatening way to fluctuations in market prices. And with advances in transportation that led to shipping grain and other agricultural products around the world, the markets to which they were exposed became increasingly international. American farmers might see high prices due to a drought in Poland, or low ones caused by a bumper crop in Argentina. Here the solution lay in the market mechanism, but again with an interesting twist. Levy recounts the emergence of futures markets, including the competition that pitted organized exchanges such as the Chicago Board of Trade against countless small firms engaged in more or less the same activity, albeit on a more informal basis, across the country’s agrarian sections. In this case, the exchanges won (as Levy shows, with the help of state legislators and regulators, many of whose support they were able to buy).
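
The logic of that market mechanism is worth making concrete. Here is a minimal sketch, with invented numbers that are not drawn from Levy's account: a farmer who sells futures contracts at planting time in effect trades an uncertain harvest price for a known one, because losses in the cash market are offset by gains on the futures position, and vice versa.

```python
# Illustrative one-to-one "short hedge"; all prices and quantities are assumed.
def revenue(bushels, futures_price, harvest_price, hedged_fraction):
    """Cash sale at harvest plus the gain or loss on futures sold at planting."""
    cash_sale = bushels * harvest_price
    futures_gain = bushels * hedged_fraction * (futures_price - harvest_price)
    return cash_sale + futures_gain

bushels = 10_000
futures_price = 1.00  # dollars per bushel locked in at planting (assumed)

for harvest_price in (0.70, 1.00, 1.30):  # bumper crop abroad, normal year, foreign drought
    unhedged = revenue(bushels, futures_price, harvest_price, hedged_fraction=0.0)
    hedged = revenue(bushels, futures_price, harvest_price, hedged_fraction=1.0)
    print(f"harvest at ${harvest_price:.2f}: unhedged ${unhedged:,.0f}, hedged ${hedged:,.0f}")
# The fully hedged farmer earns $10,000 in every scenario; the price risk has moved
# to whoever took the other side of the contract on the exchange.
```

The risk does not disappear; it is transferred to a speculator or a processor who is better positioned, or simply more willing, to bear it.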

In time, however, it became clear that industrialization was creating risks that neither insurance companies nor markets were able to take on. Long before the Great Depression of the 1930s, successive financial panics—in 1873, 1893, 1901, 1907, 1913—were triggering business downturns that imposed a loss of jobs and therefore of wages on workers, and also lost production and therefore lost profits on firms’ owners. An especially fascinating part of Levy’s story is the movement, at the beginning of the twentieth century, to make the modern business corporation into a mechanism for containing such risks in the aggregate as well as sharing them across individual workers and firms. As Levy shows, at the time of the great merger movement that created the trusts that gave anti-trust law its name, “socialism” meant an imposed absence of competition—not, as today, public ownership. Business fluctuations were seen as the consequence in large part of a lack of economic coordination. The key to limiting industrial risk was eliminating “ruinous” competition by means of corporate consolidation.

George Walbridge Perkins, who simultaneously served as vice president of the New York Life Insurance Company and as a partner in J. P. Morgan & Co., not only advocated this kind of corporate coordination but personally exemplified it. Perkins regularly funneled investment funds from New York Life to finance the mergers that Morgan arranged. He also helped to oversee business coordination once the mergers were complete. Perkins was Morgan’s point man in 1901 in the transaction that created U. S. Steel by combining Andrew Carnegie’s various companies with other firms engaged in all aspects of steel-making, from ore extraction to finished products, and after the merger Perkins served as a director of U. S. Steel and as chairman of the board’s finance committee. A year later, he was the central figure in the transaction that merged the McCormick Harvesting Machine Company and the Deering Harvester Company to form International Harvester—thereby capturing 85 percent of the American market for farm machinery—and he once again joined the new company’s board and served as the “final arbiter” of its policies for the next ten years.

It may seem odd, in view of his role in creating trusts of the kind against which Theodore Roosevelt famously inveighed, that George Perkins not only became close to Roosevelt but even served as his campaign manager when Roosevelt sought to regain the presidency in 1912. But Perkins’s view of what he was trying to achieve in the transactions that he engineered, and in the policies that he then oversaw at both U. S. Steel and International Harvester, bore little resemblance to how today’s merger-and-acquisition bankers would describe their aims. Consolidations of the kind that he had wrought in the steel and agricultural-machinery industries, he argued, represented “a form of socialism of the highest, best and most ideal sort—a socialism that makes real partners of employer and employee and yet preserves the right of private property ... promoting good will and industrial peace.”

Although others did not explicitly call corporate consolidation “socialism,” the thought was widely shared, and not just in business circles. John Bates Clark, an economics professor at Columbia and one of the key figures responsible for founding the American Economic Association (the association’s most prestigious award is still given in his name), emphasized in his writings the “dynamic risks” of capitalism and looked to corporatization to insure against those risks that were not suitable to lay off on insurance companies. In 1921, Frank Knight, an economics professor at the University of Chicago, addressed his classic Risk, Uncertainty and Profit to the distinction between matters of chance that were potentially subject to actuarial quantification, and that some form of conventional insurance could therefore in principle handle, and “uncertainties,” which were not. In Knight’s view, such uncertainty is “ineradicable” under capitalism. And although the trust movement was in retreat by then, he argued that the modern business corporation was an organizational way to contain uncertainty of this nature.

In this dimension as in so many others, the Great Depression marked a departure (one that mostly lies beyond the scope of Levy’s historical account). Whether in a spirit of experimentation or simply out of confusion, the second President Roosevelt confronted the massive economic disaster with a variety of different approaches, including some that plainly contradicted one another. Corporatization—not through merger but by substituting coordination, partly voluntary and partly imposed, in place of competition—was one of them. But once the Supreme Court threw out the National Recovery Administration, FDR’s emphasis shifted to direct government assumption of the economic risks that now seemed both unmanageable and destructive: not just bank-deposit insurance, already enacted in 1933, but unemployment insurance (1935), retirement-income insurance via Social Security (1935), federal crop insurance (1938), and the familiar “alphabet soup” of federal jobs and assistance programs that by the Depression’s end had directly employed or aided one of every five Americans. With various additions—medical insurance for the elderly under Medicare and for the indigent under Medicaid (both 1965), and also others such as flood insurance (1968) and securities-fraud insurance (1970)—government has been central to the way Americans have shared life’s risks ever since.

Similarly, the emergence of macroeconomic policy—efforts by the government to even out the highs and lows of the business cycle, mostly through monetary policy (the Federal Reserve’s management of interest rates) and fiscal policy (variations in taxes and/or government spending)—is likewise a consequence of the Great Depression. Before the 1930s, it was plausible to believe that the economy would quickly self-equilibrate in response to any contraction-producing shock, or that most people were able to protect themselves against such occurrences even if they lasted for a while, or that the government played too small a role in the economy to enable it to have much stabilizing influence anyway. Since the 1930s, and especially since World War II, none of these excuses have been credible.

The American government formally took on responsibility for achieving macroeconomic objectives of this kind with the Employment Act of 1946. (The bill’s originally proposed title was the Full Employment Bill; but for many congressmen of the time, that would have been going too far.) Revisions to the Federal Reserve Act in 1977 charged America’s central bank with maintaining both stable prices and maximum sustainable employment. In the modern economy, therefore, macroeconomic policy has become yet another form of risk absorption, cushioning both workers and firms from the aggregate elements of what John Bates Clark called the “dynamic risks” of capitalism.

The federal government’s role as an absorber of risk was especially visible during the recent financial crisis. Acting through Treasury initiatives such as the Troubled Asset Relief Program and the Targeted Investment Program, along with direct lending by the Federal Reserve System, the government saved major banks such as Citigroup and Bank of America and non-bank firms such as the insurer AIG from outright failure. The government ended up owning more than a third of Citi’s equity (even so, the bank’s stock price fell from $55 to 97 cents), and more than 90 percent of AIG’s. Preventing these firms from failing clearly avoided a more general financial collapse that would have triggered the secondary failure of many more of the nation’s banks, in the pattern of the 1930s. The government also led the bankruptcy and reorganization of General Motors and Chrysler. The Federal Reserve stepped in to purchase new (for the central bank) classes of assets, such as commercial paper and residential mortgage–backed securities, for which the markets had all but ceased to function. Its doing so boosted the prices of outstanding securities remaining in private hands, thereby preventing another round of losses and potential failures, and also restored lending volume that had mostly disappeared. At the same time, however, the government did little to aid distressed retail borrowers: home foreclosures rose to record levels, and families who have seen the prices of their houses fall below what they owe on their mortgages mostly remain underwater.

The increasingly pervasive government involvement in absorbing risk in all these ways—directly providing unemployment and health and other insurance, maintaining a retirement pension system, saving major financial institutions from failure, rescuing large manufacturers, damping business fluctuations—raises several fundamental questions. At the simplest level, under what circumstances should the government take on this function? As Levy’s historical account makes clear, one answer is that the modern economy presents risks that people would like to share, but for which purely private mechanisms are inadequate to the task. Yet this basic fact does not help us to say whether pensions should be public or private, for example—or, as last year’s debate over Obamacare highlighted, whether the government should provide health insurance, or merely require everyone to have it, or perhaps do neither.

A different answer is that under some circumstances the government can be more efficient at this task than the private sector. In contrast to the 10 to 15 percent for overhead and profit that is typical of most private insurance companies, the Social Security system’s overhead is less than 1 percent, and Medicare’s is about 2 percent; and there is no need for a profit margin. Under other circumstances, maybe the government is less efficient than the private sector. These differences are what much of the health-care debate has been about.

Yet another answer to the fundamental question of government involvement is that under some circumstances private markets with voluntary participation fail to work. A standard example is the self-selection that sensible people rationally make when they have information that insurance providers either do not have or have but cannot use. If everyone who has reason to expect to live long and stay healthy opts out of an insurance program, only the sick and the short-lived will remain, and the cost of insuring only them will rise (so that then only the very sick and very short-lived will remain and the cost will rise yet further, so that...). Having one program for everyone, which only the government can enforce, avoids the problem. A philosophical question that Levy’s book implicitly raises, however, is whether compelling people to insure a risk, as Obamacare will do—and as we already do for the risk of injuring someone in an auto accident, or having one’s house burn down, or for medical risks for the elderly—violates their independent personhood.
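
The spiral in that parenthesis is easy to see in a toy calculation. In the sketch below, with all figures invented purely for illustration, the premium is set each year to the average expected cost of whoever is still enrolled, and anyone expecting to spend less than the premium drops out.

```python
# Toy adverse-selection spiral: the premium chases the average cost of whoever remains.
expected_costs = [1_000, 2_000, 4_000, 8_000, 16_000]  # assumed annual cost per member

pool = list(expected_costs)
year = 0
while pool:
    year += 1
    premium = sum(pool) / len(pool)             # premium set to the pool's average cost
    print(f"year {year}: {len(pool)} enrolled, premium ${premium:,.0f}")
    stayers = [c for c in pool if c > premium]  # only those expecting to spend more keep buying
    if len(stayers) == len(pool):               # no further dropouts; the pool has shrunk to the sickest
        break
    pool = stayers

if not pool:
    print("the market unravels: no one is left to insure")
```

Run as written, the pool shrinks from five members to two to one before unraveling entirely. A single pool that everyone must join keeps the low-cost members in and stops the loop at year one.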

A second set of questions turns on the fact that in some circumstances people’s behavior—for example, what care to seek for illnesses once they have them—depends on whether they bear the cost themselves or share it with others. This issue is central to the ongoing debate over entitlement spending. Simply referring to “entitlements,” as we customarily do, is an obfuscation that too often prevents the public discussion from addressing what is really at issue.3 The federal government has many entitlement programs, including food stamps, foster care, veterans’ benefits, farm supports, and retirement benefits for the government’s civilian employees and military personnel; but just two entitlement programs—Social Security and Medicare—together account for 69 percent of the total spending. Adding in what Medicaid pays for nursing-home stays by patients aged sixty-five or older brings that share to 73 percent. As the baby-boom generation continues to retire, and people live longer, and per-person medical costs continue to rise, the share of the entitlement budget devoted to the support and care of the nation’s retired elderly population will increase significantly.4

Many economists, and others, too, believe that the key to limiting this unsustainable increase is to restructure Medicare and Medicaid so that these programs’ participants will bear more of the costs of the care that they receive, and therefore will make more careful decisions about how much care to seek. But most people’s ability to pay out of their own pockets is very limited. The support and the care of the retired elderly is as important a moral issue as any that society faces. To what extent our society should place responsibility for that care on our government, and in what form, will be at the center of American politics for the foreseeable future.

A third set of questions stems more directly from the financial crisis, and in particular from the dramatic demonstration that some banks, and other financial institutions, too, are “too big to fail.” One issue is simply whether private firms should be allowed to exist at all if their failure would impose such large costs on the economy as a whole that the government could not allow them to fail. A variety of proposals—ranging from restricting banks’ participation in risky activities, to requiring that they hold more capital, to outright limits on banks’ size—is currently under debate.

A more basic issue in this regard is not whether the government should stand behind these private firms, as long as they are allowed to exist, but what implications follow from its doing so. If the government supports them in this way, are they really private firms? And if not, what is the justification for their earning returns consistent with private risk-taking? Recent work at the Bank of England shows that, for the twenty-eight banks around the world deemed “systemically important financial institutions” (and therefore too big to fail), the ability to borrow more cheaply because of their governments’ implicit guarantee currently increases the banks’ annual profits by some $500 billion. But that is substantially larger than these banks’ combined profit. If the profit they “earn” comes solely from a subsidy attributable to the government’s guarantee, who is entitled to keep it—the banks’ shareowners or the taxpayers?
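
The arithmetic behind estimates of that kind is simple, even if the inputs are contested. Here is a hypothetical sketch of the "funding advantage" approach; the figures below are invented for illustration and are not the Bank of England's: multiply a bank's debt by the gap between what it would pay to borrow on a standalone basis and what it actually pays once the implicit guarantee is priced in.

```python
# Hypothetical funding-advantage estimate of a too-big-to-fail subsidy (invented inputs).
def implicit_subsidy(debt, standalone_rate, supported_rate):
    """Annual subsidy = debt outstanding times the borrowing-cost gap the guarantee creates."""
    return debt * (standalone_rate - supported_rate)

debt = 1_500e9            # $1.5 trillion of liabilities for one illustrative bank
standalone_rate = 0.045   # 4.5% cost of funds with no assumed government support
supported_rate = 0.038    # 3.8% cost of funds once implicit support is priced in
reported_profit = 8e9     # assumed annual profit for the same bank

subsidy = implicit_subsidy(debt, standalone_rate, supported_rate)
print(f"annual funding subsidy: ${subsidy / 1e9:.1f} billion")  # about $10.5 billion
if subsidy > reported_profit:
    print("the estimated subsidy exceeds the bank's entire reported profit")
```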

Finally, a parallel set of questions focuses not on the banks and their shareowners, but on the managers hired to run them (and, for that matter, many companies in nonfinancial industries, too). The outsized compensation of top corporate executives has certainly attracted ample attention in recent years. So has the widening gap between these individuals’ pay and what everyone else receives. The average compensation for CEOs of S&P 500 companies is now $12 million, and the ratio of their pay to the average within their respective firms is more than 350 to one. What has received less attention—but what should, in light of the history of American attitudes toward risk recounted in Levy’s book, receive more attention—is the extent to which today’s large corporations are insulating their top executives (or these top executives are insulating themselves) from the consequences of failure. 

It is not that CEOs never lose their jobs. They are fired, or forced to resign, with some frequency. The question is what ensues when this happens. In many companies today, a fired CEO goes away with a financial settlement large enough to guarantee that he (and occasionally she) can permanently remain among the country’s economic elite without ever having to work another day. Failed executives receive going-away payments of $50 million, or $100 million, or more, sometimes after only a few years’ work. Sometimes the payment is an annuity: $5 million per year for the rest of one’s lifetime (and the life of one’s spouse) is not unheard of. In most cases, the settlement is merely what the CEO’s contract specifies. The standard explanation is that, without such a provision, no executive worth having would take the job in the first place.

In effect, corporate America has evolved a labor market that treats top executives differently from other people not just in the amount paid to them but in their exposure to downside risk. For three decades now, our society has been making the consequences of unemployment ever harsher for ordinary citizens. The theory, apparently, is that facing the disastrous consequences of losing one’s job creates an incentive to work hard and well. At the same time, the theory applied to our top executives is that their incentive to good performance is not just the enormous upside potential of success but also the generous cushion provided in case of failure. Reading Jonathan Levy’s book leaves no doubt which theory is more consistent with traditional American values.

Benjamin M. Friedman is the William Joseph Maier Professor of Political Economy at Harvard University and the author, most recently, of The Moral Consequences of Economic Growth (Knopf).

  1. Risk-sharing within families continues to be important. According to some estimates, even small families can internally insure against nearly three-quarters of the income risk associated with individual family members’ uncertain length of life.

  2. By 1832 there were 35 insurance companies listed on the New York Stock Exchange, and all were either marine or fire insurance companies.

  3. Although the word “entitlement” had appeared as a technical term in U.S. legislation as early as 1974, its current sense as a blanket term for federal safety-net programs is due to President Reagan.

  4. One favorable sign, however, is that—for reasons no one yet understands—the seemingly inexorable upward trend in the cost of health care has leveled off in the past few years. If this new trend continues, the savings to the government via Medicare and Medicaid would be $770 billion over the next ten years.