On Tuesday afternoon, Facebook announced that it had shut down 32 pages and accounts for what it calls “coordinated inauthentic behavior.” Over the last 16 months, these pages and accounts, which were followed by nearly 300,000 users combined, paid $11,000 to run 150 ads and created dozens of events. They appear to resemble those used by Russian operatives during the 2016 election: they targeted many of the same issues, like race, and created similar events, including protests. Facebook said it was unable to conclusively prove who was responsible for the accounts, but a company representative told members of Congress that Russia was suspected. The company is currently working with the FBI.
These interference campaigns are growing more sophisticated. While the content they’re distributing is similar to what was circulated in 2016, those responsible have gotten better at covering their tracks. In 2016, for instance, Russian operatives paid for Facebook ads in roubles. The most recent accounts suspended by Facebook “used VPNs and internet phone services, and paid third parties to run ads on their behalf” to obscure their identities.
Facebook’s response is as notable as the evolution of the campaigns themselves. Shortly after the 2016 election, the company dismissed reports that Russian operatives had used the platform to influence the outcome. “To think [Facebook] influenced the election in any way is a pretty crazy idea,” Mark Zuckerberg said on November 10, 2016, before suggesting that only a “small amount” of fake news circulated during the campaign. In the intervening 18 months—and particularly after the Cambridge Analytica scandal—the company has taken steps to show that it treats both user privacy and disinformation campaigns more seriously. Publicly reporting even a relatively small operation suggests that the company is now using transparency as part of its public relations strategy.