
The Best Safeguard Against Artificial Intelligence Is the Constitution

The president has the power to step in now before A.I. becomes a threat to national security.

President Joe Biden meets with his science and technology advisers to discuss the advancement of American science and technology, including artificial intelligence. (Kevin Dietsch/Getty Images)

Is there good cause to believe that the further development of artificial intelligence systems, or A.I., might compromise our national security? Certainly, there are fears that we are hurtling into the unknown at great speed. A March 22 open letter, signed by such tech luminaries as Elon Musk and Steve Wozniak, warned of "profound risks to society and humanity" and called for a "pause" in further development. A month later, A.I. investor Ian Hogarth sounded a similar refrain in The Financial Times, recommending that we slow the development of superintelligent "god-like AI," which he described as unlikely to be aligned with human values. These exhortations are highly reminiscent of another famous missive: the 1939 letter in which Albert Einstein warned of the danger of Nazi Germany developing an atomic bomb.

Naturally, A.I. may not destroy the human race; indeed, the odds are against it. The Economist took the pulse of several A.I. experts and found that many saw little or no risk of calamity, though others did not rule it out entirely; its bottom line was roughly a 5 percent probability of A.I. exterminating the human race, and it suggested that there were bigger concerns on the horizon. Still, 5 percent is serious enough. If crossing a particular intersection carried a 5 percent risk of death at a certain time of day, many pedestrians would refrain, and the crossing would become an item on the public policy agenda. If the risk of commercial airline crashes neared 5 percent, whole fleets would be grounded.

So let us consider that 5 percent chance for a moment. The signers of the March open letter may have made a compelling case for a pause, but they were naïve in one way that Einstein was not: Einstein didn't beg private industry to self-regulate with the future of the human race at stake. Rather, he made his appeal to President Franklin D. Roosevelt, because he knew he needed to get the government involved. We are in the same position now: Our national security state should decide whether there is to be "god-like AI."

An open letter begging the gods of A.I. to pause the creation of godlike A.I. is unlikely to have an effect, for the simple reason that the profit motive makes it impossible for them to stop. That's why these appeals must be made to those who have sworn an oath to uphold the Constitution, which vests them with the power to defend the country. It is a defensible exercise of the president's war powers to take just enough control of A.I. companies such as DeepMind, OpenAI, and the other industry leaders identified by Hogarth. It is reasonable to declare a national emergency now, under the 1976 National Emergencies Act or otherwise, if only to determine when, for the sake of national security, a pause should take effect. In fact, not only does the Constitution permit a form of nationalization of these companies; it may even require it, especially if privately developed godlike A.I. were at some point to achieve an independent capacity, under private control, to declare and wage war.

The president is vested with the power to make several interventions in the name of national security. First, he can issue an executive order appointing nonvoting independent directors, paid by the government, to sit on the boards of companies such as OpenAI and DeepMind, as well as any known competitors. The executive order would empower these directors to review all internal corporate documents, without exception, relating to the development of A.I., and give them some say in ensuring that A.I. projects are in alignment, if not with human values, at least with national security. The president can direct each appointed director to make regular reports, redacted if necessary to protect private intellectual property, assessing the risk of any matters with direct or indirect implications for national security. The president would transmit such reports to Congress and make them fully available to selected congressional committees. The executive order could be set to expire in a year unless renewed, upon review by the secretary of defense, because of national security concerns identified by the independent directors.

Naturally, Congress can override any such executive order. But unless and until it does, this is an inherent emergency power of the president, just as taking control of the Manhattan Project from private enterprise would have been. Nor is such action the kind of outright nationalization that the U.S. Supreme Court barred when President Harry Truman seized the steel industry during the Korean War. In that case, Youngstown Sheet & Tube Co. v. Sawyer, the court held that even in times of war, the war power did not justify the seizure of a private industry without the approval of Congress. But the steel industry was making steel, not developing a potential threat to the existence of the state. What is proposed here is not a taking of private property subject to the Fifth Amendment but rather a collection of intelligence. And while it is no doubt a coercive intrusion, it leaves ownership of the enterprise in private hands while curbing its ability to develop, even inadvertently, a godlike A.I. that could assume the president's own powers over peace and war. This is a reasonable proposition: Under the Defense Production Act, the president already has the power to conscript, or require for government use as a necessary "supply," any kind of product, including technology, that may have a military application.

It is true enough that the United States by itself cannot control the development of godlike A.I. But if the president takes the actions set out here, the European Union, which is similarly aware of the risks and is already developing a framework for A.I. regulation, would likely follow suit. As some on the right in this country complain, the U.S. depends more than ever on the EU for global rules on antitrust and other regulation that checks corporate power here. As for Russia, thanks to the war in Ukraine, it may now lack the personnel and capacity to take any lead in A.I. And while China remains at arm's length from the rest of the international community, whatever A.I. research it pursues will be under the thumb of a state dictatorship with powerful incentives to keep it under some kind of control: a rare silver lining of despotism.

Godlike A.I. may be years away; so, in 1939, was an atomic bomb. But by 1945, the U.S. had dropped two of them on civilian populations. For A.I., it is already 1939: The president has received his warning from prominent scientists. Even if the Supreme Court were to declare this use of the war power illegal, there is the option, in a true emergency, of ignoring the court's decision. In 1861, President Abraham Lincoln defied Chief Justice Roger Taney's ruling that he could not suspend the writ of habeas corpus without the approval of Congress. Lincoln asked: "Would not the official oath be broken, if the government should be overthrown, when it was believed that disregarding the single law would tend to preserve it?" The president is still charged with the duty of protecting the government, and, it may turn out, the very existence of any government under human control.