
The Delete Squad

Google, Twitter, Facebook and the new global battle over the future of free speech


A year ago this month, Stanford Law School hosted a little-noticed meeting that may help decide the future of free speech online. It took place in the faculty lounge, where participants were sustained in their deliberations by bagels and fruit platters. Among the roughly two dozen attendees, the most important were a group of fresh-faced tech executives, some of them in T-shirts and unusual footwear, who are in charge of their companies’ content policies. Their positions give these young people more power over who gets heard around the globe than any politician or bureaucrat—more power, in fact, than any president or judge.

Collectively, the tech leaders assembled that day in Palo Alto might be called “the Deciders,” in a tribute to Nicole Wong, the legal director of Twitter, whose former colleagues affectionately bestowed on her the singular version of that nickname while she was deputy general counsel at Google. At the dawn of the Internet age, some of the nascent industry’s biggest players staked out an ardently hands-off position on hate speech; Wong was part of the generation that discovered firsthand how untenable this extreme libertarian position was. In one representative incident, she clashed with the Turkish government over its demands that YouTube take down videos posted by Greek soccer fans claiming that Kemal Ataturk was gay. Wong and her colleagues at Google agreed to block access to the clips in Turkey, where insulting the country’s founder is illegal, but Turkish authorities—who insisted on a worldwide ban—responded by denying their citizens access to the whole site for two years. “I’m taking my best guess at what will allow our products to move forward in a country,” she told me in 2008. The other Deciders, who don’t always have Wong’s legal training, have had to make their own guesses, each with ramifications for their company’s bottom line.

The session at Stanford concluded with the attendees passing a resolution for the formation of an “Anti-Cyberhate Working Group,” then heading over to Facebook’s headquarters to drink white wine out of plastic cups at a festive reception. But despite the generally laid-back vibe, the meeting, part of a series of discussions dating back more than a year, had a serious agenda. Because of my work on the First Amendment, I was asked to join the conversations, along with other academics, civil libertarians, and policymakers from the United States and abroad. Although I can’t identify all the participants by name, I am at liberty, according to the ground rules of our meetings, to describe the general thrust of the discussions, which are bringing together the Deciders at a pivotal time.

As online communication proliferates—and the ethical and financial costs of misjudgments rise—the Internet giants are grappling with the challenge of enforcing their community guidelines for free speech. Some Deciders see a solution in limiting the nuance involved in their protocols, so that only truly dangerous content is removed from circulation. But other parties have very different ideas about what’s best for the Web. Increasingly, some of the Deciders have become convinced that the greatest threats to free speech during the next decade will come not just from authoritarian countries like China, Russia, and Iran, which practice political censorship and have been pushing the United Nations to empower more of it, but also from a less obvious place: European democracies contemplating broad new laws that would require Internet companies to remove posts that offend the dignity of an individual, group, or religion. The Deciders are right to be concerned about the balkanization of the Internet. There is, moreover, a bold way to respond to that threat. The urgent question is whether the Deciders will embrace it.

At Facebook, the Deciders are led by Dave Willner, the head of the company’s content policy team. His career provides a kind of case study in how the Deciders’ thinking has evolved. Now 28, Willner joined Facebook five years ago, working night shifts in the help center, where he answered e-mails from users about how to use the photo uploader. Within a year, he had been promoted to work on content policy. Today, he manages a crew of six employees who work around shared desks at Facebook’s headquarters in Menlo Park; rather than a global hub for content control, their space, festooned with colorful posters, more closely resembles a neater-than-usual college dorm. Working under Willner’s team are a few hundred “first responders” who review complaints about nudity, porn, violence, and hate speech from offices in Menlo Park, Austin, Dublin, and Hyderabad, India. (Willner is also married to a fellow Facebook employee who now leads the User Safety team, responsible, among other things, for child protection and suicide prevention; one imagines rather heady dinner chatter.) Facebook had only 100 million users when Willner was hired, compared with the billion-plus it has now. Each day, those users upload more than 300 million photos alone; every week, Facebook receives more than two million requests to remove material. (The New Republic’s owner was a Facebook co-founder.)

When I first met Willner at the Stanford meeting, he wore an orange T-shirt, a gray striped sweater, blue corduroy trousers, round glasses, and a bookish beard—looking very much like the former anthropology and archeology major that he was before starting at Facebook. He took a class about Islam in his senior year, which he says comes in handy in his current job. At the time Willner joined Facebook’s content policy team, the company had no rules on the books for what speech violated its terms of service. So Willner decided to write them himself. He chose as his model university anti-harassment codes, since he himself had just graduated from college. But he soon found that vague standards prohibiting speech that creates a “hostile environment” weren’t practical. The Facebook screeners scattered across three continents brought vastly different cultural backgrounds to their roles and had to rule on thousands of pieces of content daily. The sheer range and complexity of the judgment calls that had to be made compounded the challenge: Is this person naked? Is a photo of Hitler racism, or political commentary? Is it bullying to post a photo of someone distorted through Photoshop? Is posting a photo of a gun a credible threat of violence? What if the gun is from the cover of a rap album?

Willner had read John Stuart Mill in college and understood the crowning achievement of the American First Amendment tradition, which allows speech to be banned only when it is intended—and likely—to incite imminent violence or lawless action. By contrast, as Willner was learning, European law draws a tighter line, prohibiting so-called group libel, or speech that offends the dignity of members of a protected class and lowers their standing in society. Willner decided that neither method would do: Both the U.S. focus on the speaker’s intent and the European focus on the social consequences of that speech would be too subjective for a 22-year-old content reviewer in Dublin or Hyderabad to apply in 20 seconds. What Facebook needed, he came to believe, was a hate-speech policy that focused on concrete, easily categorized actions, so that the decision to remove controversial content, or to escalate the dispute to Willner and his colleagues in Silicon Valley, could be based on nothing more than the information contained within the form that Facebook users file to complain about offensive posts and applied like an algorithm. He sought an engineer’s response to a thorny historical and legal problem—a very Silicon Valley approach.

At first, it didn’t go well. To try to spell out what qualified as a hateful post, Facebook hired an outside firm to write an “Operations Manual for Live Content Moderators,” which was subsequently leaked. Some of the distinctions made by the document were ridiculed by the blogosphere for being jesuitical: “Blatant (obvious) depictions of camel toes and moose knuckles” were banned in the “sex and nudity category,” while the graphic content category held that “bodily fluids (except semen) are ok to show unless a human being is captured in the process.” Furthermore, the draft standards seemed to ban all “Holocaust denial which focuses on hate speech” and “all attacks on Ataturk (visual and text)” around the world, even though Holocaust denial is illegal only in certain countries, including France and Germany, and attacking Ataturk is outlawed only in Turkey. In response to the uproar, Facebook fired the consulting company, and Willner redoubled his efforts to minimize the opportunities for subjective verdicts by his first responders.

Eventually, the project led to Facebook’s most important free-speech decision: to ban attacks on groups, but not on institutions. The current community standards declare: “We do not permit individuals ... to attack others based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or medical condition.” But Facebook allows caricatures that depict members of protected groups doing unflattering things, as well as attacks on their faith or leaders. It’s only when a user categorically reviles a protected group that he crosses the line: “I hate Islam” or “I hate the Pope” is fine; “I hate Muslims” or “I hate Catholics” is not. The distinctions might be seen as a triumph of reductionism. But they have empowered the company to resist growing calls for the wholesale deletion of speech that foreign governments and their citizens consider blasphemous.
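The appeal of that rule, from an engineer’s point of view, is that it turns on a single question: who or what is being attacked. The sketch below is a minimal, purely illustrative rendering of such a “groups, not institutions” test in Python; the word lists, phrases, and function are hypothetical inventions for this example, not Facebook’s actual system, which its reviewers apply with human judgment and far richer context.

```python
# Illustrative sketch only: a toy version of a "groups, not institutions" rule.
# The word lists and matching logic are hypothetical, not Facebook's real system.

PROTECTED_GROUPS = {"muslims", "catholics", "jews", "women", "immigrants"}
INSTITUTIONS_AND_IDEAS = {"islam", "catholicism", "the pope", "judaism", "feminism"}

ATTACK_PHRASES = ("i hate ",)

def review_post(text: str) -> str:
    """Return 'remove' if the post categorically attacks a protected group,
    'allow' if it targets only an institution, idea, or leader."""
    lowered = text.lower().strip()
    for phrase in ATTACK_PHRASES:
        if lowered.startswith(phrase):
            target = lowered[len(phrase):].rstrip(".!?")
            if target in PROTECTED_GROUPS:
                return "remove"   # categorical attack on people sharing a protected trait
            if target in INSTITUTIONS_AND_IDEAS:
                return "allow"    # attack on a faith, institution, or leader
    return "allow"                # no categorical attack detected

if __name__ == "__main__":
    print(review_post("I hate Islam"))    # allow
    print(review_post("I hate Muslims"))  # remove
```

The point of the sketch is the shape of the rule, not its vocabulary: the reviewer asks only what the target of the attack is, never how offended any audience is likely to be.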

Facebook’s new policy was dramatically tested last September, when the company refused to remove Innocence of Muslims, the anti-Muhammad video that was initially blamed for causing the Benghazi riots that led to the death of the American ambassador to Libya. After watching the video, Willner and his colleagues concluded that, because nothing in it explicitly denigrated Muslims as a people, there was nothing to ban.

As the world watched footage of the body of Christopher Stevens being dragged through the streets, YouTube reached a similar decision. Despite allegations that the riots had been caused by an Arabic-language version of the video posted on the site, it turned out that an English version of Innocence of Muslims had been in circulation since July. YouTube had determined that the clip didn’t violate its terms of service, which by then were similar to Facebook’s: “Sometimes there is a fine line between what is and what is not considered hate speech. For instance, it is generally okay to criticize a nation, but not okay to make insulting generalizations about people of a particular nationality.” As the violence spread from Libya to Egypt, YouTube temporarily restricted access in those two nations, because of the confusion on the ground. But the company refused to delete the video around the world, even as Egyptian leader Mohamed Morsi, speaking at the United Nations, called on YouTube to do just that.

In a separate U.N. speech, invoking the American free-speech tradition, President Barack Obama rejected Morsi’s idea that the video could be banned simply because it was blasphemous: The First Amendment, he suggested, prohibits the government from taking sides in religious disputes. Instead, in the name of protecting U.S. foreign policy interests, the Obama administration asked YouTube to reconsider its conclusion that the video didn’t violate the company’s terms of service. By exerting this subtle pressure, Obama came close to a version of the heckler’s veto, urging the film’s removal because of its potential to provoke riots. U.S. courts, whatever the administration’s preference, discourage the government from suppressing speech because of its likely effect on an angry mob; judges generally require the authorities to control the audience, not muzzle the speaker. In this case, of course, the mobs fell well outside of U.S. jurisdiction, and the link between the video and potential violence also wasn’t clear. In fact, subsequent investigation called into question the claims of causality that had seemed obvious early on.

Like Facebook, Google and YouTube were right to focus on the content of the film, and right to conclude that, unless the incitement to violence was obvious—say, in the form of a tagline reading, “RISE UP IN VIOLENCE AGAINST MUSLIMS”—the Innocence video should remain as widely available as possible. Had YouTube made a different decision, links to the video from the many news stories that mentioned it would have been disabled, denying millions of viewers across the globe access to a newsworthy story and the chance to form their own opinions. In the heat of the moment, both the White House and the content teams at Facebook and YouTube had to make judgments about the same inflammatory material. From a free-speech perspective, the young Deciders made better decisions than the president of the United States.

The meetings that the Deciders have been holding at Stanford and elsewhere trace their origins to an earlier gathering half a world away. It was convened in 2011 by the Task Force on Internet Hate of the Inter-parliamentary Coalition for Combating Antisemitism, an initiative with an unwieldy name but a crucial mission: to try to get European parliamentarians and law-enforcement officials to work together with American civil libertarians, the Anti-Defamation League, and the leading Internet companies in shaping standards for online expression. The venue was the Houses of Parliament in London, in a paneled room near the top of the Big Ben clock tower.

After some spirited discussion, the group trooped down a winding stone staircase to the visitors’ gallery overlooking the House of Commons, from which the task force watched our chairman, Member of Parliament John Mann, deliver a blistering summary of his position on the regulation of online speech. “Freedom of expression is not always a good thing,” he told his colleagues in the House. “The Internet is now the place where anti-Semitic filth is spread.”

Because of its historical experience with fascism and communism, Europe sees the suppression of hate speech as a way of promoting democracy. Paradoxically, it has increasingly begun to pursue this goal by legislative and judicial fiat. More than 20 European countries have signed a protocol on cyber-crime that calls on member nations to expand the existing criminal penalties for “acts of a racist and xenophobic nature committed through computer systems.” The Council of Europe has also pushed for increased hate-speech regulation. It’s because of moves like those that some Deciders are worried, as one of them put it, that “we may end up in a situation where Europe slides into a situation currently occupied by Turkey, Pakistan, Saudi Arabia, and India”—countries in which claims of offensiveness can be deployed as a tool of oppression.

A recent book, The Harm in Hate Speech, vividly confirms the Deciders’ fears. It was written by Jeremy Waldron, a New York University and Oxford professor who is a vocal champion of the European approach and its most prominent defender for American audiences. Waldron is best known for his longstanding opposition to judicial review: He believes that legislatures, rather than courts, should take the lead in formulating public policy. But this faith in the power of legislation to protect fundamental rights makes him naively optimistic about the capacity of legislatures (rather than Deciders) to balance the competing values of dignity, privacy, and free speech. He notes, accurately, that the U.S. is a global outlier in not regulating group libel and sympathetically invokes laws in countries like the United Kingdom, Germany, and France that prohibit expressions of racial and religious hatred even when there’s no immediate prospect that they will provoke violence. He maintains that hate speech creates what he calls “an environmental threat to social peace.”

Waldron’s argument has a remarkable blind spot: It virtually ignores the Internet. He begins his book by imagining a Muslim man walking with his two young daughters on a city street in New Jersey, where they are confronted with an anti-Muslim sign. Waldron believes that allowing these posters on street corners will convince members of vulnerable minorities “that they are not accepted as ordinary good-faith participants in social life.” But like the European regulators who share his views, Waldron seems unaware that the most significant free-speech debates today don’t take place on street corners, or lampposts, or sandwich boards. They take place online, where a person’s social networks and RSS feeds can filter out many unwelcome views—but where the risks that overregulation will open the door to suppression of political expression are exponentially higher than in the offline world. The secret police can’t eavesdrop on every whisper of revolution. Armed with a Great Firewall, on the other hand, repressive governments can block entire categories of information.

And they’re determined to do so. At a December meeting in Dubai, for example, a majority of the 193 countries that make up the U.N.’s International Telecommunication Union approved a proposal by China, Russia, Tajikistan, and Uzbekistan to create ominous “international norms and rules standardizing behavior of countries concerning information and cyberspace,” as a description of the measure provided by the Chinese government puts it. Waldron, who endorses an earlier U.N. resolution condemning religious defamation while emphasizing the need to protect ideological dissent, would of course never go that far. But the thing about slippery slopes is that, in practice, they can prove hard to avoid. The Dubai meeting highlights the danger of addressing hate speech on the borderless Internet by expanding international regulation: It may be authoritarian dictatorships, not enlightened democracies, that end up writing the new rules.

Waldron offers a defense of free-speech regulation for the nineteenth or early twentieth centuries that threatens the openness of the Internet in the twenty-first. He can’t clearly tell us, for example, whether his definition of hate speech would permit or ban the anti-Muhammad cartoons that Facebook refused to take down after they were first published by a Danish newspaper in 2005. Here is his tortuous analysis: “In and of themselves, the cartoons can be regarded as a critique of Islam rather than a libel on Muslims; they contribute, in their twisted way, to a debate about the connection between the prophet’s teaching and the more violent aspects of modern jihadism.” But, he adds, “They would come close to a libel on Muslims if they were calculated to suggest that most followers of Islam support political and religious violence.” He then offers this hedging conclusion: “So it might be a question of judgment whether this was an attack on Danish Muslims as well as an attack on Muhammad. But it was probably appropriate for Denmark’s Director of Public Prosecutions not to initiate legal action against the newspaper.” That byzantine verdict, offered after the fact, is all very well for Denmark’s Director of Public Prosecutions, but Waldron’s opaque standard would be impossible for an Internet first responder to apply in a matter of seconds. And Web companies have another, better reason for rejecting European-style prohibitions on group libel, with their complicated calculations about the social consequences of hate speech: Even if they could be applied by Internet screeners, they would open the door to vast subjectivity and to a less open world.

The Deciders, of course, have blind spots of their own. Their hate-speech policies tend to reflect a bias toward the civility norms of U.S. workplaces; they identify speech that might get you fired if you said it at your job, but which would be legal if shouted at a rally, and try to banish that expression from the entire Internet. But given their tremendous size and importance as platforms for free speech, companies like Facebook, Google, Yahoo, and Twitter shouldn’t try to be guardians of what Waldron calls a “well-ordered society”; instead, they should consider themselves the modern version of Oliver Wendell Holmes’s fractious marketplace of ideas—democratic spaces where all values, including civility norms, are always open for debate.

Some of the Deciders understand this. At a hate-speech panel in Houston in November, Jud Hoffman, Facebook’s global policy manager, told the audience that his company was tightening its policies, introducing a new system for identifying speech likely to provoke violence. Rather than examining the context in which speech arises, Hoffman said, the company now looks for evidence of four objective criteria to determine whether a threat is credible: time, place, method, and target. If three of the four criteria are satisfied, the company removes the post or video. This refined approach, Hoffman stressed, helps to protect users against the heckler’s veto, preventing removal decisions from being based on the predicted reaction of the audience. It also avoids Waldron’s murky inquiries into the effect of speech on a group’s social status.
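As described, the credibility test reduces to counting objective signals and applying a threshold. The Python sketch below illustrates only that three-of-four logic; representing the signals as booleans, and the example values, are assumptions made for illustration, and how Facebook actually detects each signal was not part of Hoffman’s description.

```python
# A minimal sketch of the "credible threat" test described above: four
# objective signals (time, place, method, target), with removal triggered
# when at least three are present. Only the three-of-four threshold comes
# from the panel description; everything else here is assumed.

from dataclasses import dataclass

@dataclass
class ThreatSignals:
    has_time: bool    # e.g., "tomorrow at noon"
    has_place: bool   # e.g., a named school or street
    has_method: bool  # e.g., a specific weapon
    has_target: bool  # e.g., a named person or group

def is_credible_threat(signals: ThreatSignals) -> bool:
    score = sum([signals.has_time, signals.has_place,
                 signals.has_method, signals.has_target])
    return score >= 3   # three of the four criteria met -> remove the post

# Example: a post naming a target, a place, and a method, but no time.
print(is_credible_threat(ThreatSignals(False, True, True, True)))  # True
```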

The company that has moved the furthest toward the American free-speech ideal is Twitter, which has explicitly concluded that it wants to be a platform for democracy rather than civility. Unlike Google and Facebook, it doesn’t ban hate speech at all; instead, it prohibits only “direct, specific threats of violence against others.” Last year, after the French government objected to the hashtag “#unbonjuif”—intended to inspire hateful riffs on the theme “a good Jew ...”—Twitter blocked a handful of the resulting tweets in France, but only because they violated French law. Within days, the bulk of the tweets carrying the hashtag had turned from anti-Semitic taunts to denunciations of anti-Semitism, confirming that the Twittersphere is perfectly capable of dealing with hate speech on its own, without heavy-handed intervention.

As corporate rather than government actors, the Deciders aren’t formally bound by the First Amendment. But to protect the best qualities of the Internet, they need to summon the First Amendment principle that the only speech that can be banned is that which threatens to provoke imminent violence, an ideal articulated by Justice Louis Brandeis in 1927. It’s time, in other words, for some American free-speech imperialism if the Web is to remain open and free in the twenty-first century.

As it happens, the big Internet companies have a commercial incentive to pursue precisely that mission. Unless Google, Facebook, Twitter, and other Internet giants draw a hard line on free speech, they will find it more difficult to resist European efforts to transform them from neutral platforms to censors-in-chief for the entire globe. Along with tougher rules on hate speech, the European regulators are weighing a sweeping new privacy right called “the right to be forgotten.” If adopted, it would allow users to demand the deletion from the Internet of photos they’ve posted themselves but come to regret—as well as photos of them that have been widely shared by others and even truthful but embarrassing blog comments others have posted about them. The onus would be on Google or Facebook or Yahoo or Twitter to take down the material as soon as a user makes the request, or to bet that a European privacy commissioner—to whom requests could be appealed—would determine that keeping the material online serves the public interest or provides journalistic, literary, or scientific value. If the companies guess wrong, they could be liable in each case for up to 2 percent of their annual incomes. A European Commission press officer stresses that each member country would choose how to implement the penalties, but for Google, the fines could hit $1 billion per incident.
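The $1 billion figure follows from simple arithmetic. As a back-of-the-envelope check, assume for illustration that the 2 percent cap is applied to Google’s roughly $50 billion in 2012 revenue (the article itself specifies only “2 percent of their annual incomes”):

```python
# Back-of-the-envelope check of the "$1 billion per incident" figure.
# Assumption for illustration: annual revenue of roughly $50 billion (Google, 2012).
annual_revenue = 50_000_000_000   # ~$50 billion
max_fine_rate = 0.02              # up to 2 percent per incident

max_fine = annual_revenue * max_fine_rate
print(f"Maximum fine per incident: ${max_fine:,.0f}")  # -> $1,000,000,000
```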

Invoking a version of the right to be forgotten, an Argentine judge in 2009 ordered Yahoo to remove racy pictures of Argentine pop star Virginia da Cunha that were leading users to pornographic sites when they searched for her name. Claiming it was too technologically difficult to remove only the photos, Yahoo removed all references to her on its Argentine servers, so that, if you plug “da Cunha” into the Yahoo Argentina search engine now, you get a blank page and a judicial order. While Yahoo eventually won on appeal, the big Internet companies don’t want to host blank pages—their business models depend on their ability to ease the free exchange of information. But the right to be forgotten, if put in place, could turn them into the equivalent of TV stations with weak signals, resulting in shows that forever flicker in and out. The Deciders would bolster their position in the fight if their own guidelines more strictly limited the kind of speech they will voluntarily delete.

When I spoke with Nicole Wong at Google five years ago, she seemed a little uneasy with the magnitude of the responsibility she had taken on. “I think the Decider model is inconsistent,” she said. “The Internet is big, and Google isn’t the only one making these decisions.” The recent meetings, though not intended to produce a single hate-speech standard, seem to have bolstered the Deciders’ belief that they must embrace the challenges of their unique positions, and, in some cases, revealed how much they relish the work. “I think this is probably what a lot of people who go to law school want to do,” Willner told me. “And I ended up doing it by accident.”

Meanwhile, the quest for the perfect screening system continues. Some of the Internet companies are exploring the possibility of deploying an algorithm that could predict whether a given piece of content is likely to cause violence in a particular region, based on patterns of violence in the past. But hoping that the machines will one day police themselves amounts to wishful thinking. It may be that U.S. constitutional standards, applied by fickle humans, are the best way of preserving an open Internet.