Machine Learning

Congress Is Racing to Catch Up With Artificial Intelligence

Lawmakers agree that the rapid evolution of this technology needs to be addressed—but they’re still getting up to speed on the details.

New Mexico Senator Martin Heinrich, a co-founder of the Senate Artificial Intelligence Caucus, is one of many lawmakers striving to get ahead of A.I. advancements. (Anna Moneymaker/Getty Images)

As governments across the world mull how to regulate the artificial intelligence technology that is rapidly changing the cultural and political landscape, policymakers in Washington are still trying to decide what path to take—and acknowledging that they still have much to learn.

While the emergence of ChatGPT and other chatbots has stirred concern among experts and the general public alike, the simple truth is that A.I. technology is already embedded in daily life. It exists in the autocomplete suggestions in an email, in the recommended television shows on a streaming service, in the facial recognition technology that unlocks your phone.
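To make that idea concrete, here is a toy sketch of the statistical intuition behind email autocomplete: count which words tend to follow which, then suggest the most frequent follower. Real systems rely on vastly larger neural language models; the sample text and function names below are invented purely for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest_next(model: dict, word: str):
    """Return the most frequent follower of `word`, if any."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Invented sample "emails" for demonstration.
sample = ("thanks for the update. thanks for your patience. "
          "please see the attached file.")
model = train_bigrams(sample)
print(suggest_next(model, "thanks"))  # prints "for"
```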

Four CEOs of companies developing artificial intelligence technologies met with Vice President Kamala Harris and, briefly, President Joe Biden at the White House last week, a sign that the administration hopes to collaborate with the rapidly evolving sector while still expressing caution about its unknown capabilities. “What you’re doing has enormous potential and enormous danger,” Biden told the executives. The White House also plans to introduce policies to shape how the government uses A.I. systems, as well as invest in federal A.I. research.

Meanwhile, members of Congress are also beginning to consider how best to approach regulating A.I. technology. “A.I. is one of the most pressing and serious policy issues we confront today,” Senate Majority Leader Chuck Schumer said on the Senate floor last week, adding that he believed any action on the issue should be bipartisan. “It is critical that as we grapple with artificial intelligence, we should seek to invest in American ingenuity, solidify American innovation and leadership, enhance our national security, [and] ensure it’s done in a responsible and transparent manner.”

In some ways, A.I. is similar to social media: rapidly evolving and conceptually slippery, particularly for a somewhat gerontocratic legislature. “The one thing we can’t do is wait, the way we did with social media, and say, ‘Go do stuff, and we’ll figure it out later,’” said Senator Mark Warner, the chair of the Senate Intelligence Committee. In March, Schumer announced a framework to regulate A.I. that would require companies to submit their technologies for review and testing, but it’s unclear how close this measure will be to any final congressional proposal.

But the first challenge in devising policy to regulate artificial intelligence may simply be defining it. “It’s constantly evolving. It’s hard for the technical community, and certainly even more so regulators, to come down on a definition of what it is,” said Matt O’Shaughnessy, a visiting fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. (I transcribed my interview with O’Shaughnessy, as well as everyone else I spoke to for this piece, using an A.I. service.)
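(The piece doesn't identify that transcription service. As a rough illustration of what A.I.-assisted transcription looks like in code, here is a sketch using OpenAI's open-source Whisper library as a stand-in; the audio filename is hypothetical.)

```python
# pip install openai-whisper
import whisper

# Load a small general-purpose speech-recognition model.
model = whisper.load_model("base")

# "interview.mp3" is a hypothetical filename for illustration.
result = model.transcribe("interview.mp3")
print(result["text"])
```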

A.I. policy is thus often informed by the risk a technology presents. While the A.I. that suggests a Netflix series may not be considered serious enough to warrant regulation, the use of A.I. technology to screen job applicants could result in discriminatory hiring practices.

The European Union is currently considering a set of regulations for A.I. technologies, creating a framework of four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Negotiations for this measure have been lengthy, and while the European Parliament last week backed stringent draft legislation, several steps remain before any agreed-to standards become law.

Still, the proposal would put the EU at the forefront of A.I. regulations. It would ban the use of A.I. in biometric identification systems in public spaces and predictive policing systems, for example, and increase transparency for generative A.I. programs like ChatGPT. However, its comprehensiveness could also create challenges: The EU must set a single standard for its 27 member countries, while the U.S. is likely to approach A.I. policy in a more piecemeal manner—whether by individual agencies determining their own policies or by Congress passing measures that address certain sectors.
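To sketch how such a tiered scheme might look in code (purely illustrative; the mapping of specific uses below is my own reading of the draft, not the legal text), one could encode the four tiers and a handful of example uses like this:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency requirements"
    MINIMAL = "no new obligations"

# Illustrative mapping only; the real act assigns uses in legal
# text, not a lookup table.
EXAMPLE_USES = {
    "biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "screening job applicants": RiskTier.HIGH,
    "generative chatbots like ChatGPT": RiskTier.LIMITED,
    "streaming recommendations": RiskTier.MINIMAL,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.name.lower()} risk ({tier.value})")
```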

“[The EU] has to set a single set of standards that the whole continent follows. The U.S. doesn’t have to do this,” said Alex Engler, a fellow in governance studies at Brookings, where he studies A.I. and emerging data technologies. “The U.S. approach will probably be more specific and, in some sense, better guidance. But it will also probably happen slower, and certainly not evenly, because we’re not passing one law that says you have to do all this at once.”

Senator Martin Heinrich, a co-founder of the bipartisan Senate Artificial Intelligence Caucus, told me that the EU and the U.S. had “very different historic regulatory cultures.” “We’re going to have to get something with 60 votes across the finish line here. I don’t think looking to the EU usually sets you up to do that,” Heinrich said, referring to the 60-vote threshold required to advance legislation in the Senate.

Representative Don Beyer, a Democrat who is taking courses in A.I. as he pursues a master’s degree in machine learning, told me that he believed “it’s unrealistic, maybe even naïve, to think that we can come up with covenants or a regulatory scheme out of the box.”

“A.I. is developing so quickly, and we are behind the curve. My approach and my expectation is that we will find smaller pieces to get done this year, and then build on it next year,” Beyer predicted, highlighting bipartisan legislation he has co-sponsored that would prevent A.I. from being involved in nuclear launch decisions. He added that the EU approach, while it may go “too far” for American sensibilities, is “fun to study.” “It’s fun to look and see, what can we copy from it that would be accessible in this culture, and these politics?” Beyer said.

Merve Hickok, the director of the Center for A.I. and Digital Policy, argued that A.I. should be regulated using a rights-based approach. “These technologies impact fundamental rights in fundamental ways. Those rights and obligations should be clearly defined,” Hickok said, highlighting how A.I. is used in policing and immigration contexts.

That emphasis on protecting rights was echoed by Senator Michael Bennet, who has introduced legislation to require federal agencies to designate an official to oversee developing technologies, including artificial intelligence. A.I. technologies should be implemented “consistent with our civil liberties and our civil rights and our privacy rights, and that’s not going to happen by accident,” Bennet told me earlier this month. He has also introduced a bill to create a task force to review existing A.I. policies and develop recommendations for further regulations.

“I don’t think we should panic. But I think we should take a thoughtful approach with respect to the federal government’s engagement itself, and … finally begin to regulate some of the largest digital platforms in this country,” Bennet said.

The White House last year introduced a “Blueprint for an A.I. Bill of Rights,” a voluntary set of guidelines encouraging companies to use A.I. technologies responsibly and with more transparency, with a focus on protecting privacy and preventing discrimination. “It is a broad and important exposition on why A.I. is harmful and needs governance, and we did not have one of those before,” said Engler. However, because it is nonbinding, “some agencies have ignored it,” Engler continued: “To most federal agencies, A.I. is not their priority. It’s like, the seventieth thing on their list.”

O’Shaughnessy highlighted the A.I. Risk Management Framework formulated by the National Institute of Standards and Technology following a directive from Congress as a starting point for developing regulatory policy. “Going forward, Congress can look at some of those optional standards that are developed in this risk management framework, see where they are applied by companies and where companies might be ignoring them, and look for where binding regulation might be helpful to make sure that we’re maximizing benefits while mitigating harms,” he said.

A key issue in developing congressional policy also lies in understanding what risks are inherent in the technology, not to mention the basics of what A.I. is and does. “I think we’re all trying to get educated as quickly as we can,” Warner said. Senator Mike Rounds, the ranking member of the Senate Armed Services Subcommittee on Cybersecurity, told me last week that it is important to communicate “what the downfalls are of the implementation of A.I.,” naming privacy concerns around the video app TikTok as an example.

“We’ll try to educate folks about how serious it is, about how advanced it already is, but also the need for our nation to be a leader in the development of A.I., not just for the economy but for our defensive purposes as well,” Rounds said.

Although ideas for regulating A.I. are still nebulous in both the White House and Congress, policymakers agree that the technology needs to be addressed. The ranking member of the Senate Intelligence Committee, Senator Marco Rubio, told me last week that “there’s no way to put that genie back in the bottle.”

“It will develop, probably a lot faster than the ability to legislate it,” Rubio said.