How YouTube Became the Worldwide Leader in White Supremacy

When Google promises to "curb" extremism on its lucrative video platform, it means nothing more than keeping advertisers happy.


It took the shock of Trumpism to finally awaken complacent liberals to a whole raft of ugly truths about twenty-first-century America—and about the last truly great American invention, the unregulated internet. Somewhere during primary season in 2016, it began to dawn on us en masse that social-media platforms like Facebook, Twitter, and Reddit—the kinds of places liberal utopians and Silicon Valley hucksters had long assured us would be magical boons to an “open society” and the progressive cause—had morphed into far more potent delivery systems for intolerance, terrorism, white supremacy, and right-wing fake news. And when the “alt-right” brought its race war to Charlottesville earlier this month with horrific results, it became clear to one and all that what began as an internet phenomenon had become a menace to society IRL.

It’s not as though we hadn’t been warned. Social scientists have been telling us for decades that terrorism and extremism grow out of social networks. In 2007, legal theorist Cass Sunstein anticipated the ill effects of internet filtering on our democracy in a prescient essay that he expanded into a cautionary book called Going to Extremes: How Like Minds Unite and Divide. “As a result of the Internet, we live increasingly in an era of enclaves and niches,” he observed, and “enclaves of like-minded people are often a breeding ground for extremist movements.” Such movements once had to involve a measure of physical human interaction, and access to information (and misinformation) that could be hard to come by. With the internet, the process of radicalization has been accelerated many times over.

During the campaign, YouTube was mostly exempted from our belated hand-wringing, soul-searching, and boycott-threatening, despite the platform’s massive and ever-expanding reach: 1.5 billion viewers a month and counting, with 400 new hours of video uploaded by users every minute. That began to change in the backwash of Trump’s election. Earlier this year, mainstream news outlets started reporting what should have long been obvious: Good ol’ lovable YouTube was hosting millions of hours of well-watched intolerance, much of it far uglier than anything Bill O’Reilly, Sean Hannity, or Mark Levin could get on the air. “Yes,” wrote BuzzFeed reporter Joseph Bernstein, “the site most people associate with ‘Gangnam Style,’ pirated music, and compilations of dachshunds sleeping is also the central content engine of the unruliest segments of the ascendant right-wing internet, and sometimes its enabler.”

Once reporters and left-wingers started venturing out of their own YouTube bubbles and sniffing around the rest of the platform, they found a house of anti-democratic horrors—one that seemed to contain an infinite number of rooms. Was it merely a coincidence, folks began to ask, that when you searched for “Holocaust,” the top 10 results directed you to Holocaust-denying content and anti-Semitic screeds? Was it just happenstance that the racists, misogynists, anti-Semites, and conspiracy-mongers of the right, from Alex Jones to wildly popular but relatively obscure propagandists, had made YouTube their digital home?

It was hard to blame YouTube, exactly. Unlike its owner, Google, with its “Don’t Be Evil” code of corporate conduct (nixed in 2015 by Google’s holding company, Alphabet), YouTube made no grand ethical claims; its original motto was “Broadcast Yourself.” Until recently, the platform was unusually honest about what it was there for; as Christos Goodrow, its “director of engineering for search and discovery,” told Business Insider in 2015, “We believe that for every human being on Earth, there’s 100 hours of YouTube that they would love to watch. And the content is already there. We have billions of videos. So we start with that premise and then it’s our job to help viewers to find the videos that they would enjoy watching.”

And this, YouTube does incredibly well. “YouTube recommends right-wing videos to people watching similar right-wing videos,” Zack Exley, a former Bernie Sanders campaign adviser, writes in a recent study of a prominent alt-right YouTuber. In the same way, YouTube sends left-wingers into a bubble of left-wing videos, fundamentalist Christians into a fundamentalist Christian bubble, and people who love to knit into a vast universe of fellow knitters. Among other things, that’s made it easy for the platform to host tons of content that some would find deeply offensive while eluding controversy—because YouTube only recommends what it thinks you want. And if you don’t go looking for neo-Nazism, or videos of decapitations, you won’t be directed to them. You’d never know they’re there.
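
The mechanism is simple enough to caricature in a few lines. Below is a deliberately toy Python sketch of the dynamic Exley describes: similarity-based ranking over a viewer’s watch history. Every video name, tag, and score here is invented for illustration; YouTube’s actual recommender is a far more elaborate (and proprietary) system, but the enclave-forming logic is the same in kind.

```python
from collections import Counter

# Invented toy catalog: each video is just a set of topic tags.
VIDEOS = {
    "right_rant":  {"politics", "right"},
    "conspiracy":  {"politics", "right", "conspiracy"},
    "left_rant":   {"politics", "left"},
    "knit_basics": {"knitting", "crafts"},
    "knit_socks":  {"knitting", "wool"},
}

def recommend(watch_history, k=2):
    """Rank unwatched videos by tag overlap with the viewer's history."""
    profile = Counter()
    for vid in watch_history:
        profile.update(VIDEOS[vid])
    scores = {
        vid: sum(profile[tag] for tag in tags)
        for vid, tags in VIDEOS.items()
        if vid not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# One right-wing video steers you toward the conspiracy video;
# one knitting video steers you toward more knitting. Neither
# viewer is ever shown the other's world.
print(recommend(["right_rant"]))   # ['conspiracy', 'left_rant']
print(recommend(["knit_basics"]))  # ['knit_socks', ...]
```

Each accepted recommendation makes the profile more homogeneous and the next slate narrower still; in a system like this, the bubble is not a bug but the equilibrium.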

But those videos were there, all the time, ready for any 15-year-old alienated white boy to find with a couple of keystrokes. He gets what he thinks he wants, too—and then some.

This was part of the bargain of an open internet, but it wasn’t one that most of us had given much thought to—even if we’d stumbled onto some of the ugly stuff from time to time. After all, who could possibly take a ranter like Paul Joseph Watson seriously?

Quite a few people, as it turned out. One little “spoof” from Watson, the British hunk of the alt-right “manosphere,” had garnered nearly 900,000 YouTube views as of last Friday. (Other well-trafficked Watson masterpieces include “JK Rowling Is a Vile Piece of Shit” and “MTV Says Black People Can’t Be Racist.”) Watson’s channel has more than one million subscribers, making him part of a growing fraternity: Without breaking a sweat—though you would definitely want a shower afterward—you can easily find more than a hundred far-right YouTubers with a million-plus total views.

“If people are sorted into enclaves and niches, what will happen to their views?” Sunstein asked back in 2007. “What are the eventual effects on democracy?” Now that white nationalism has become a mainstream phenomenon, and its most powerful political champion occupies the Oval Office, we have at least one grim answer to that question. We just don’t have the slightest clue about how to address the root of the problem.

Once liberals got a long-overdue whiff of the stench emanating from the other side of YouTube, Google had itself a little P.R. problem that soon turned into a big one when The Times of London, the Wall Street Journal, and others exposed a much dirtier secret hidden in plain view: YouTube wasn’t just offering up millions of hours of hate speech, but rewarding the most successful propagandists with a cut of the revenue from those video ads you have to wait impatiently to “skip” before getting, say, your “33 Fun Facts About Slavery” (#5: “There Were Just as Many White Slaves as Black Slaves”). Worse, some of the YouTube ranters were being paid—in one case, millions—to produce noxious content for YouTube’s “preferred” channel.

At first, despite a particularly loud outcry in Western Europe, Google pooh-poohed the whole “financing of hate” business. Extremists, said the head of Google Europe, were only making “pennies not pounds” off the major brands running ads on their channels. But when advertisers started to pull their brands from the platform in March and April—eventually about 5 percent of YouTube’s advertisers joined in the “boycott”—Google realized it had a problem on its hands. It’s one thing when the Daily Mail and BuzzFeed highlight your hateful content; it’s another when AT&T and Wal-Mart and Toyota start pulling out because you’re running their ads on (in Toyota’s case) “A 6000 Year History of the Jew World Order.” YouTube stood to lose some $750 million from an advertising boycott, analysts estimated—a mere drop in Google’s ocean of ad dollars, but a lousy “indicator” of YouTube’s future profitability.

Since then, Google has done what any self-respecting mega-corporation would do in a situation like this: Take a series of well-publicized half-measures to establish itself as “advertiser-friendly,” win Wal-Mart back, and get the carping press off its case. (The alt-right has inadvertently done its part to make Google look better, too, organizing—then canceling, post-Charlottesville—nationwide protests of the company for firing an engineer who wrote that women in tech are biologically inferior.) Google changed the kinds of videos that can carry advertising, blocked ads on videos with overt hate speech (which depends entirely on your definition of “overt,” and mostly means calls to violence), announced it was funding 50 “counter-extremist” NGOs to help it find new and creative ways to curb extremist content, and said it would recruit more “Trusted Flaggers”—unpaid volunteers charged with (often inaccurately) “flagging” the worst of the worst for YouTube to deal with.

As part of this familiar ritual of corporate self-absolution, Google also symbolically “fired” some of the best-known YouTubers it had paid to produce videos. The first big name to go, as a result of the initial Journal exposé in February, was its most popular star, with 53 million subscribers, the model-handsome Swedish gamer PewDiePie (real name: Felix Kjellberg). His offenses were, in the realm of YouTube extremism, minor: some cheeky Nazi references, some “gratuitous” sex and violence, and one particularly rank stunt in which he paid Fiverr freelancers—people who say they’ll do anything on video for $5—to unfurl a sign reading “Death to the Jews.”

What “firing” meant, in this case, was “demonetizing”—shunting PewDiePie off the preferred channel and pulling ads from his videos. This, in turn, gave one of Google’s big clients, Nissan—which had paid PewDiePie for an ad of its own—a chance to serve up some self-righteous PR as well. “We strongly condemn this highly offensive content and will not work with him again,” said a mortally offended Nissan spokesperson. Google, too, could claim to have “dealt with” the PewDiePie threat. But YouTube had already made him an international star. And his content remains wildly popular on YouTube, even without that “preferred” status, and thus indirectly lucrative (with all the clicks he gets) for Google and all its advertisers; in July, one of his videos garnered almost 10 million views, and none of the 23 he posted during the month had an audience of less than 3 million. And what about “Death to the Jews,” which brought the weight of Google down on him? Just two of the many re-uploads on YouTube have 600,000 views between them, on top of the millions who’ve seen the original. Take that, PewDiePie!

One of the reasons that YouTube is poised to overtake all of television in audience size, and hours watched, is its seeming benignity. Most liberals—certainly most over the age of 40—still tend to regard Fox News as the primary flame-fanner of the intolerant right, rivaled only by Breitbart News and (for the especially woke) Alex Jones’s InfoWars. So does the mainstream media. While we were busy sharing “Reggae Shark,” the platform we knew and loved was fast becoming the Fox News of the new white nationalism—the most efficient tool for spreading bigotry to the masses that’s yet been invented. 

Just as we largely ignored the political punch of talk radio in the ’90s and ’00s—who could see Michael Savage as a political sage, for heaven’s sake?—we’ve been slow to recognize the (this sounds strange, just saying it) unique menace to democracy, civil society, and plain human decency that YouTube presents. 

Now comes the hard part: Figuring out how in God’s name to “respond,” to defang the crypto-Nazi propagandists and effectively troll an army of professional trolls. Google tells us it can be done: Not only can jihadis be shut down, but the budding white nationalists can be un-indoctrinated, by means of the very same technology that created the networks that spawn them. Ross Frenett, CEO of Moonshot CVE, a counter-extremist data firm that’s working with Google, expressed wild optimism last fall in a Newsweek op-ed. “By marrying big data with personal empathy,” he wrote, “our generation can starve extremist organizations of their ability to recruit, and they will wither on the vine and die.”

That’s Silicon Valley speak for “technology can fix everything”—including societal problems that platforms like YouTube have made demonstrably worse. But the most promising fix dreamed up so far, known as the “Redirect Method,” shows the serious limits and pitfalls of such an effort. Redirect sounds good: Google finds out what people want to see when they search for ISIS material (and, coming soon, also “inflammatory religious and supremacist” content), and places advertisements alongside the results that point to counter-information, theoretically disrupting the ideological echo chambers.
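
In cartoon form, the core of Redirect looks something like the Python sketch below. The flagged phrases and counter-narrative URLs are placeholders invented for illustration, not Google’s actual lists; the real method involves curated keyword research and targeted ad auctions, not a lookup table.

```python
from typing import Optional

# Hypothetical curated mapping from extremist search phrases to
# counter-narrative content. These entries are invented placeholders.
FLAGGED_QUERIES = {
    "join isis": "https://example.org/former-fighters-speak-out",
    "caliphate recruitment": "https://example.org/life-under-isis",
}

def redirect_ad(search_query: str) -> Optional[str]:
    """Return a counter-narrative URL if the query contains a flagged phrase."""
    normalized = search_query.lower().strip()
    for phrase, counter_url in FLAGGED_QUERIES.items():
        if phrase in normalized:
            return counter_url
    return None  # ordinary query: no intervention

print(redirect_ad("how to join ISIS"))     # counter-narrative link, served as an "ad"
print(redirect_ad("sleeping dachshunds"))  # None
```

Even the cartoon exposes the catch: the intervention works only for as long as its targets keep clicking.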

But what makes extremists extreme—whether they’re leftists, right-wingers, fundamentalist Christians, or anti-vaxxers—is a ferocious resistance to anything that contradicts their worldview. When you spend some time with young white nationalists on YouTube, Reddit, or 4chan, you understand that counter-information is nothing but fodder for validating their beliefs. And an attempt to force it on them only reinforces the whole weltanschauung of the new white supremacists: that there’s a massive global conspiracy to disempower white people and destroy Western civilization, accomplished by clamping down on free expression and hiding the dirty “truth” about race, gender, global finance—everything.

There are more pragmatic problems, too. YouTube’s light policing of content has already resulted in the banning of journalists and social activists. What happens now that the parameters have expanded to quash “supremacist” and “inflammatory religious” content? As Kieron O’Hara wrote at Slate, in a piece about Redirect, users quickly adapt to such technological tricks: “In a future world where such techniques were known about and understood, wouldn’t we just stop clicking on ads, especially when seeking edgy or transgressive content?”

Of course we would. In trying to “solve” the problems it’s helped create, Google is also working against human nature—against our tendency to “self-select” and organize ourselves into enclaves, our deep-seated need to belong and to believe. All the human qualities, in other words, that make social media so powerful (for ill and good) in the first place. Naturally, Google will only take its counter-extremist efforts so far—far enough to increase profits, rather than drive them down. YouTube can make itself “advertiser-friendly.” It can inoculate itself against lawsuits filed by victims of extremist violence. It cannot change hearts and minds that are already hardened. That’s our job.