The Senate Judiciary Committee will begin its confirmation hearings for Judge Amy Coney Barrett next week. I am not looking forward to them, and for reasons that have nothing to do with Barrett herself. Supreme Court confirmation hearings as they currently exist are far from a productive use of time for the Senate, the nominee, or the American public. Fifteen years before her own hearings, Justice Elena Kagan described them as “a vapid and hollow charade, in which repetition of platitudes has replaced discussion of viewpoints and personal anecdotes have supplanted legal analysis.”
Why are confirmation hearings so miserable? Partisan and ideological divides play an obvious role. Since Robert Bork’s disastrous hearings in 1987, nominees have been far more cautious when describing their own views. The nominees’ obligation to demonstrate judicial independence also hinders some lines of questioning. Would-be justices often decline to say how they would decide specific cases or issues, on the grounds that it would compromise their impartiality. As a result, the hearings often become an endless loop of questions about Roe v. Wade and other hot-button cases, met with carefully rehearsed answers designed to avoid answering them. (In Barrett’s case, this line of questioning may prove even less useful than usual.) All of these factors, plus the immense stakes involved, make for a maddening and fruitless experience.
I’ve already proposed scrapping this system entirely: remove the president and the Senate from the process of selecting future Supreme Court justices, and instead choose them at random from among active federal judges to serve 18-year terms. But since this would take a constitutional amendment, it’s more of a long-term solution than an immediate form of relief. For now and going forward, it would be more useful if the senators avoided asking questions to which they won’t or can’t get any answers. Instead, they should explore a judicial nominee’s approach to history—and the lessons that can be drawn from it.
How the justices understand American history can influence how they decide cases. Instances abound just from the most recent term. In Ramos v. Louisiana, the court noted how white supremacists used nonunanimous verdicts in criminal cases to dilute African American influence on juries in its ruling that abolished the practice. In Espinoza v. Montana Department of Revenue, some of the justices pointed to the anti-Catholic bigotry that inspired state-level “Blaine amendments,” which denied state funds to religious schools. Justice Neil Gorsuch’s majority opinion in McGirt v. Oklahoma, perhaps one of the best-written decisions of the last decade, opens with an unforgettable line: “At the far end of the Trail of Tears was a promise.”
I would like to know how a nominee thinks about history itself. A justice’s understanding of the revolutionary and founding eras can influence how they interpret the Constitution’s meaning today. Their perception of the Reconstruction era, for example, might shape not only how they read the Fourteenth Amendment but also how they decide civil rights cases on every level. And a would-be justice’s understanding of the Warren court could be instructive about their views on whether the constitutional revolution of the 1950s and 1960s went too far or not far enough.
It’s particularly important to hear the nominee’s opinions on when the Supreme Court got something wrong. Legal scholars often write about what they call the “anti-canon” of American constitutional law—the cases universally denounced by judges and scholars as not just erroneous but morally and ethically repugnant. The four best-known examples are Dred Scott v. Sandford, Plessy v. Ferguson, Buck v. Bell, and Korematsu v. United States. Others, like Lochner v. New York, are often denounced in strong terms as well. I would like to hear about when and why a prospective justice thinks the Supreme Court made a mistake and what lessons can be learned in hindsight from those errors.
In terms of history and the Supreme Court, Reconstruction may loom largest of all. Historians have long compared it to a second American Revolution of sorts—a comparison that is particularly apt when it comes to constitutional law. The ratification of the Thirteenth, Fourteenth, and Fifteenth Amendments set the nation on a path toward multiracial democracy, albeit one that would not be more fully realized until the mid-twentieth century. But a campaign of white supremacist terrorism in the South, coupled with growing indifference among Northern whites, led to nearly a century of Jim Crow laws and racial apartheid.
The Supreme Court, which played a central role in translating the dream of Reconstruction into reality in the 1950s and 1960s, helped doom it in the 1870s. In U.S. v. Cruikshank, the justices overturned a series of federal convictions against the perpetrators of the Colfax massacre in Louisiana, concluding that it was the responsibility of state governments to provide justice. In practical terms, the ruling defanged the federal government’s ability to suppress white supremacist terrorism, especially when Democrats seized control of state governments. Later, in the Civil Rights Cases of 1883, the court stripped Congress of its power to pass anti-discrimination laws under the Thirteenth and Fourteenth Amendments.
Since parts of those rulings still carry precedential weight, nominees might be reluctant to challenge them or express open disagreement with them. (Justice Clarence Thomas, for his part, suggested that Cruikshank should be rejected in a 2010 case.) But they should be able to articulate why the courts should be particularly cautious when depriving the federal government of its ability to enforce civil rights measures on behalf of disfavored groups. Ideally, they should also be able to understand and explain Reconstruction in line with the current scholarly consensus and not through the discredited Dunning school that dominated much of twentieth-century education.
Other past cases raise questions about limits on the government’s power to violate reproductive autonomy and other fundamental human rights. In the 1920s, Virginia adopted a law allowing the involuntary sterilization of certain patients deemed “unfit,” most commonly those with intellectual disabilities. The law arose from the eugenics movement, the evil stepchild of the Progressive era, which purported to “improve” humanity by determining who was “fit” or “unfit” to bear children. To its eternal shame, the Supreme Court signed off on a sterilization order by a Virginia mental asylum for 18-year-old Carrie Buck in 1927.
“It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind,” Justice Oliver Wendell Holmes wrote for the 8–1 majority. “The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Three generations of imbeciles are enough.” Buck v. Bell is a chilling warning of the government’s potential indifference to basic human rights, its willingness to embrace scientific racism, and the courts’ potential complicity.
Some overturned rulings provide more obvious sources of reflection for nominees. Korematsu v. United States would give would-be justices the opportunity to explain the limits on the government’s power in times of crisis or war. Lochner v. New York and other rulings from that judicial era raise questions about whether the Constitution has a particular ideological preference, as well as the limits on the government’s power to impose economic regulations. Olmstead v. United States, where the court initially said warrantless wiretaps didn’t violate the Fourth Amendment, raises questions about whether the Constitution’s eighteenth-century provisions can apply in the digital age.
All of these rulings make for depressing reading and grim questioning. But that’s precisely why they may be just as important as the precedents that justices actually draw upon in their day-to-day work. If Amy Coney Barrett wins Senate approval, she will likely help decide hundreds or even thousands of cases on the high court. Statistically speaking, she will get it wrong from time to time. How she—or any other Supreme Court nominee, for that matter—handles the possibility that she could be wrong matters just as much as whether she gets it right.