
The Incredible Power and Learned Helplessness of Twitter and Facebook

The social media giants that deplatformed Trump claim to be unable to stop the spread of hate, lies, conspiracy theories, and fascistic plots.

Twitter CEO Jack Dorsey testified remotely during a Senate Judiciary Committee hearing in November. (Hannah McKay/Pool/Getty Images)

Despite being the CEO of both Twitter and the payments company Square, Jack Dorsey has an aw-shucks habit of seeming uncomfortable with the power he wields. On Wednesday, he tweeted a stem-winding thread explaining Twitter’s decision to suspend President Trump’s account for inciting the deadly Capitol Hill riot. Saying he felt no pride over the choice, Dorsey painted himself, and Twitter, as having been backed into a corner.

“We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety,” he wrote. “Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.”

That Trump was an enormous source of disinformation, compulsively lying as he finger-painted his own reality, is tough to dispute. CNBC recently found that Trump’s most-engaged tweets often contained lies. More than that, though, Trump used Twitter as a bully pulpit from which to rile up his followers and delegitimize his perceived enemies, especially the mainstream media. It has long made sense for Twitter to boot him for violating the terms of use that it applies to all of its other users. (Analysts noted that the move might also make business sense.)

But deplatforming, while rooted in principles of protecting the speech rights of the vulnerable against abusers, disinformationists, and fascists, is a fraught process, with ever-shifting targets. When done by companies, it also lacks transparency and public accountability. As soon as Trump was kicked off Twitter—and Facebook, YouTube, Snapchat, and other social platforms—speculation began as to where he might go. Parler seemed like the most obvious candidate—until Google and Apple removed it from their app stores and Amazon Web Services stopped hosting it.

Deplatforming is the most effective way to stop popular social media users from using the organizing and broadcast tools of big platforms to spread hate, conspiracies, and lies. The alt-right troll Milo Yiannopoulos is sometimes cited as a positive example of deplatforming. After recordings emerged of him praising pedophilia—the last straw for even some of his erstwhile supporters—he was ejected from major social media platforms, lost a book deal, and was cut loose by Breitbart, left to beg for money on small alternative networks, including Parler, to cover his legal fees. He’s since faded into insignificance—a damning fate for a right-wing exhibitionist in an attention-based economy.

But dramatic, wholesale bans of famous people, while sometimes justified, don’t address the everyday concerns affecting the bulk of platform users. Content moderation remains the industry’s great unsolved problem.

“Have you tried to moderate 15 million people?” Mark Weinstein, the founder of the social network MeWe, asked in an interview with OneZero. Weinstein discussed the travails of starting an independent, privacy-focused social platform only for it to become a haven for right-wing extremists pushed off big platforms. MeWe was investing in human moderators (to what extent, Weinstein didn’t say) but still found that militia members were using the site. The bottom line: It is neither easy nor cheap to offer users the tools of online discussion and association while also policing their behavior.

Weinstein’s exasperated question could as easily be asked by Facebook CEO Mark Zuckerberg or by Dorsey about their services’ enormous user bases. But it would be a disingenuous query. Whereas Weinstein was a medium-scale internet entrepreneur shocked at what was unfolding on his service, Twitter and Facebook, chasing scale and profits, have failed to invest sufficiently in human-led moderation. (While promising yet-to-be-realized artificial intelligence solutions, Facebook has become notorious for outsourcing content moderation to overworked human contractors traumatized by the horrific material they must monitor.) For years, Facebook has known that its algorithms push people toward extremist groups and misinformation, but the company wasn’t troubled by that because it produced more user engagement. As Representative Alexandria Ocasio-Cortez said on Instagram this week, speaking about who shares blame for the Capitol Hill riot, “Zuckerberg, too, creating recommendation engines, just funneling millions of people into white supremacist groups and organizations. He recommended it, he built the platform for it, he accelerated this.”

She’s right: Facebook must clean up its mess. But as tech giants purge extremists from their networks, liberals should feel a pang of discomfort that a few monopolistic companies, including those like Amazon that operate the infrastructure that powers many websites and apps, control who gets to speak and operate online. There will always be edge cases, and how these systems are constructed and managed matters enormously (as in the case of Facebook’s recommendation systems). But we have already landed in troubling territory, where not just individual users but whole social platforms can be wiped off the digital map, almost by fiat. After Parler, it may be Telegram, Zello, or CloutHub—or MeWe or other services we know nothing about. The rules for platform owners are sometimes as hazily defined as they are for those platforms’ users.

In his defense of banning Trump, Dorsey wrote that “a company making a business decision to moderate itself is different from a government removing access, yet can feel much the same.” In other words, this isn’t censorship, but Twitter is dealing with fundamental issues of power and coercion. When it comes to regulating online speech, government and business feel dangerously adrift and equally unaccountable. To find a way out, platforms have to clearly and publicly define their terms of use, invest in more human content moderation, and apply those rules to everyone. But one day soon, we’ll have to reckon with the fact that it’s not just Trumpist demagogues and their extremist followers who represent a problem. Behind them lurk Silicon Valley monopolies whose unchecked power rests in the hands of a few unelected CEOs.