
When Censorship Makes Sense: How YouTube Should Police Hate Speech

Late last week, Google yanked “The Innocence of Muslims” from YouTube in Egypt, Libya, and some other Muslim nations. By that point, an ambassador and three other Americans were already dead in Libya, while riots raged across the Middle East. Still, the company’s actions left behind an uncomfortable question: Should Google pull videos from YouTube just because they make people angry and violent?

Google was, in my view, right to suspend the video, given the clear and present danger of more violence. But Google’s content-removal process left much to be desired. It has become clear that Google needs a better system for dealing with hard speech questions.

YouTube, to be clear, isn’t an open forum (even if it sometimes seems that way). For one thing, Google uses an ingenious sex-detecting algorithm to preemptively yank porn. It also employs a complicated system to help copyright owners (mainly Hollywood) locate their works. Finally, the firm bans a long list of other content, including: “animal abuse, drug abuse, under-age drinking and smoking, bomb-making, graphic or gratuitous violence, gross-out videos, hate speech, predatory behavior, stalking, threats, harassment, intimidation, invading privacy, revealing other people’s personal information, inciting others to commit violent acts, and spam.”

Some of these categories, like drug abuse, are relatively easy to define. But others are harder, like “hate speech,” “gross-out” and “inciting others.” Right now, Google decides everything itself, which forces the firm either to act as an arbitrary censor, which it hates, or to sit in the awkward position of reacting to requests from the White House and other governments. After a video is flagged, a special team within Google decides whether its content guidelines have been violated, but Google has also shown it will reach beyond its guidelines to yank content on a case-by-case basis. The latter process is decidedly ad hoc.

A better course would be to create a process that relies on a community, either of regional experts or of serious YouTube users. Community members would (as they do now) flag dangerous or illegal videos for deletion. Google would decide the easy cases itself and turn the hard cases over to the community, which would aim for a rough consensus. Such a system would also serve as an early-warning signal, one that might have prevented the riots in the first place.

How might a regional, community-based system work? As now, any user could nominate a video for deletion, and if it fell clearly within the categories above, it would be speedily deleted. But for the hard questions, Google could demand that the nominator argue the case before either a global (for all of YouTube) or regional (for country-specific sites) community forum. YouTube users in good standing—those who actually upload videos on a consistent basis—would be allowed to comment until some kind of rough consensus is reached. Without consensus, the video stays. If this system worked, in the case of “Innocence of Muslims,” someone could have made the case much earlier that the “movie” should be taken down in Muslim countries as “hate speech.” And that just might have prevented some of what’s happening right now.
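For readers who like to think in code, here is a minimal sketch of the flow just described: a flag comes in, the easy categories are decided centrally, and the hard ones go to a forum of users in good standing who vote toward a rough consensus. Everything here is an assumption made for illustration (the Flag and Vote records, the triage and community_decision functions, the two-thirds threshold, and the upload count used as a proxy for "good standing"); none of it reflects how YouTube actually works.

# Hypothetical sketch of the flag -> triage -> community-consensus flow.
# All names and thresholds are illustrative assumptions, not anything
# Google or YouTube actually exposes.

from dataclasses import dataclass
from enum import Enum

# Categories that are easy to adjudicate centrally (per the guidelines list above).
CLEAR_CUT = {"porn", "spam", "animal_abuse", "bomb_making"}

class Decision(Enum):
    DELETE = "delete"
    KEEP = "keep"
    ESCALATE = "escalate_to_community"

@dataclass
class Flag:
    video_id: str
    category: str               # e.g. "hate_speech", "spam"
    region: str | None = None   # None means a global (all-of-YouTube) question

@dataclass
class Vote:
    user_id: str
    uploads_last_year: int      # proxy for "good standing": users who actually upload
    supports_removal: bool

def triage(flag: Flag) -> Decision:
    """Easy cases are decided centrally; hard ones go to the community forum."""
    if flag.category in CLEAR_CUT:
        return Decision.DELETE
    return Decision.ESCALATE

def community_decision(votes: list[Vote], threshold: float = 2 / 3) -> Decision:
    """Rough consensus: delete only if a supermajority of eligible voters agree.
    Without consensus, the video stays."""
    eligible = [v for v in votes if v.uploads_last_year >= 3]  # "good standing" test
    if not eligible:
        return Decision.KEEP
    share = sum(v.supports_removal for v in eligible) / len(eligible)
    return Decision.DELETE if share >= threshold else Decision.KEEP

if __name__ == "__main__":
    flag = Flag(video_id="abc123", category="hate_speech", region="EG")
    if triage(flag) is Decision.ESCALATE:
        votes = [
            Vote("u1", uploads_last_year=12, supports_removal=True),
            Vote("u2", uploads_last_year=5, supports_removal=True),
            Vote("u3", uploads_last_year=0, supports_removal=True),   # not eligible
            Vote("u4", uploads_last_year=8, supports_removal=False),
        ]
        print(community_decision(votes))  # Decision.DELETE (2 of 3 eligible agree)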

There are both theoretical and practical arguments against the system I am suggesting. For one thing, the community process might be more restrictive than some people would like. Alternatively, it might be too loose, or unrepresentative of the broader non-YouTube-using community. And perhaps most troubling, the system might just not “work” as a practical matter—it might not attract enough responsible people, particularly for regional sites, willing to opine on whether a video should be yanked or not.

The practical questions can be addressed by looking at a leading model for community content-yanking: Wikipedia. On Wikipedia, any user can propose the deletion of a page that does not fit Wikipedia’s content guidelines. The nomination is debated (on Wikipedia) until a rough consensus is reached, which it usually is. While that system may not work perfectly, it has kept Wikipedia from becoming Spampedia, a forum for ideological projects, or simply a tool for marketing companies that want to flog unknown products.

To be sure, attracting responsible community input isn’t easy, and it isn’t clear that YouTube uploaders are as involved as Wikipedia editors—but perhaps it’s worth a trial. If it doesn’t work, an alternative to a user-driven system would be regionalized panels of good citizens, acting as judges, who would be willing to opine on the hard questions, the way that panels of prominent authors decide what words should be in the American Heritage Dictionary.

As for the appropriateness of the YouTube community, the question is who we’re comparing it to. The current system depends on a small, anonymous team of people in Mountain View who follow guidelines, and who can be overridden by unspecified others within Google for any reason. That system works for the easy questions, but it has become clear that Google needs something better.

Having an actual process, as opposed to a single point of decision, would serve the interests of Google and the world. It might, ideally, create a YouTube that is more responsive to the different sensitivities of different parts of the world, which would be nice. It might get messy at times, but we need to understand that, at some level, Google is trying to create a free speech jurisprudence, a project that the Supreme Court spent much of the 20th century working on. And that’s not easy, even for Google.