
The Sad Implosion of Google’s Ethical A.I.

The surveillance-capitalist giant fired some of its most popular researchers—and revealed its true colors in the process.


In December, Alphabet, Google’s parent company, fired Timnit Gebru, a celebrated artificial intelligence researcher and one of the most accomplished members of her field. Gebru, one of the few Black women in a senior position at Alphabet (where fewer than 3 percent of employees are Black), had been an integral member of the company’s Ethical A.I. team, a well-regarded, diverse group of high-achieving researchers who explored issues such as racial bias in facial recognition and how “fairness” might be defined in machine-learning systems, which undergird everything from automated content moderation to résumé screening. The team was considered an important component of Google’s effort to be an ethically engaged research institution, one willing to challenge some of the orthodoxies of the field. Its work also contributed to several Google products, including a tool for reducing bias in machine-learning models and (in a lighter vein) a celebrity facial-recognition app.

Now all of that is in disarray, and the future of ethical A.I. research at Google is in question. Google has spent the three months since Gebru’s firing flailing to account for its behavior. In January, Axios reported that Google was investigating researcher Margaret Mitchell, who founded the Ethical A.I. team, for allegedly collecting information related to Gebru’s dismissal. On February 5, Mitchell expressed public concern about Gebru’s firing. Two weeks later, she was fired. (Google claimed Mitchell was let go for violating company policies.) The company conducted an investigation, refused to release the results, and promised minimal reforms. A recent Axios headline said it all: “Google tweaks diversity, research policies following inquiry.”

For critics of Google—who increasingly hail from within the company, including as part of the nascent Alphabet Workers Union—tweaks aren’t enough. They want major changes in policy and personnel that may test how far a company like Google is willing to go—or whether it can even be an incubator of ethically minded, noncommercial research. Dr. Alex Hanna, a member of the Ethical A.I. team who has tweeted critically about the company’s actions, recently wrote that she wasn’t doing so for sympathy or attention: “I’m doing it because I’m wanting y’all to join in to radically transform (and destroy, where necessary) these institutions.”

It’s unclear whether Google can be made into a more socially responsible actor, much less radically transformed. But progressive-minded tech workers think it’s worth trying. While exhibiting noble intentions—and often acting at great risk to their careers—they might find, in the end, that Google’s leadership doesn’t care for their reforms and that ethical technologies have no place in a market-dominating surveillance-capitalist giant. Perhaps Google can only be destroyed.

Dr. Hanna’s call for revolutionizing Google was echoed in a post published Monday by Google Walkout, an organizing group that led a protest in 2018 against a $90 million payout to Andy Rubin, the Android creator who was accused of sexual harassment. In a broadside titled “The Future Must Be Ethical: #MakeAIEthical,” Google Walkout wrote that “Google has chosen to dismantle its AI ethics work, making it clear that the company will only tolerate research that supports its bottom line.” The group detailed the mistreatment of Gebru and other researchers and called for a boycott of Google by academics and job-seekers. It also asked legislatures to strengthen whistleblower protections, calling them “a powerful tool for guarding against the worst abuses of the private entities which create these technologies.”

The Google Walkout post coincided with a report from NBC News about Google’s handling of racial, gender, and personnel issues. NBC talked to at least 10 people who described similar experiences; when two of them brought complaints about racial and gender discrimination to human resources, “they were both given the same advice: undergo mental health counseling and take medical leave.” One of the Google Walkout organizers also said that she had been demoted for her advocacy.

Those who took medical leave often found, upon returning, that their positions had changed or that they had new managers. Yet the practice, an ersatz way of dealing with systemic issues, continued. “Going on leave is so normalized,” said a former Google employee.

Another recent report, in The Washington Post, detailed Google’s botched efforts to recruit from historically Black colleges and universities. HBCUs were downgraded in Google’s recruiting system, and one of its lead recruiters said she was fired for “raising concerns about bias against HBCU students in the interview and hiring process.”

What emerges from these stories of racial and gender bias, summary firings, and indifference to employees’ mental health is that Google is pretty much like any other big company. Paying lip service to diversity initiatives, it acts aggressively to defend its own interests and ultimately sees its employees, even those with glittering public reputations, as disposable. As the authors at Google Walkout detailed, regarding the treatment of Margaret Mitchell, “During the five weeks that Dr. Mitchell was suspended, Google’s ‘global investigations’ unit intimidated remaining members of the team, some of whom were coerced into interrogations outside of work hours.”

A company whose global investigations team interrogates employees and rifles through their email hardly seems able to play host to research that might criticize its own practices. The problem with Google’s Ethical A.I. venture is not the outspokenness of its team members but that the company isn’t built to tolerate dissent or the kind of free-thinking it ostensibly celebrates.

That is what made the Ethical A.I. team seem like a civic-minded exception at a company whose bottom line still depends, after all these years, on tricking people into clicking on ads. Gebru and Mitchell’s team offered something else, as Google Walkout wrote—namely, “people who could connect their lived experiences to practices of power, subjection, and domination which get encoded into AI and other data-driven systems.”

This kind of language—speaking of power and domination—must seem foreign to senior executives at a monopolistic giant like Google. For a publicly traded company whose ambition is to ingest all of the world’s information and monetize it for investors’ gain, achieving power and domination is the point. Subjecting every bit and byte to the surveiller’s gaze is Google’s modus operandi; it’s how it sees the world; it’s how it makes gobs of money to subsidize press-friendly initiatives like moonshot projects and sophisticated A.I. research.

Unfortunately, the Ethical A.I. team seems to have been just another tool for burnishing Google’s reputation. As soon as these researchers became inconvenient, as soon as their diverse radicalism turned from a P.R. coup into a threat, they became the enemy within: their credentials suspended, their colleagues investigated, their concerns deflected with promises of medical leave and therapy.

“Google is not committed to making itself better, and has to be held accountable by organized workers with the unwavering support of social movements, civil society, and the research community beyond,” wrote Google Walkout.

Googlers, especially the company’s A.I. researchers, have every right to their rage. But perhaps the overriding lesson of this disaster is exactly what Dr. Hanna gestured at: Google is compromised beyond repair. When the revolution ultimately comes to Silicon Valley, it will be to destroy Alphabet and its peers, not to rescue them.