
How a Bunch of Revolutionary War Reenactors Got Caught Up in Facebook’s Content Moderation Dragnet

“It’s people in 1776 in wool clothes with muskets that it takes a minute to load and shoot.”

Members of the First Virginia Regiment, a Revolutionary War living history reenactment group, participate in the annual George Washington Birthday Parade in Virginia.
Alex Wong/Getty Images

Rory Nolan remembers the first time he was banned from Facebook. It was October 8, 2020, the day news broke that a militia group, in concert with an FBI informant, had planned to kidnap Michigan Governor Gretchen Whitmer. He was chatting with a friend on Facebook Messenger; a few minutes later, he got a text message from the same friend asking if Nolan had blocked him. Puzzled, he checked his Facebook account—he had been booted. He tried to appeal the ban, only to receive an automated message that his account had been permanently disabled “because it did not follow our Community Standards. This decision can’t be reversed.” Soon his Instagram account received the same treatment.

Nolan belongs to historical reenactment groups that sometimes dramatize Revolutionary War-era militias (you can begin to see the problem), and he manages the Facebook and Instagram pages for several of them. He tried to establish new accounts under new email addresses, but they didn’t last long before getting swept up in the same moderation process. Again, they were banned with no possibility of appeal. And just like that, Nolan’s social media presence—and much of his social life—quietly winked out of existence.

Losing one’s Facebook account rates low on the scale of tragedy, but Nolan’s experience reveals the strange, often clumsy, and imprecise ways that Facebook tries to manage content on its platform. Many people don’t know they’ve violated a Facebook policy until they run into the buzzsaw of the company’s automated systems. Sometimes those systems seem no more sophisticated than tracking certain forbidden keywords and flagging pages associated with them. Crudely designed, with little opportunity to appeal decisions to actual human beings, they seem better equipped to disable pages like Nolan’s than to respond to right-wing extremism and threats of violence.

But the story is also more complicated than that, as Nolan found: while Facebook’s A.I. moderators might flag participants in “living history” events as violating platform policy, a great deal of potentially objectionable material—including from actual militia members—still appears on Facebook and Facebook-owned apps. The overall impression is that Facebook’s enforcement policies and methods are both haphazardly conceived and opportunistically implemented.

For Nolan and his friends, historical reenactments—or “impressions” of historical events—have become a major part of their lives. Conducted in the spring and summer, they sometimes involve hundreds of participants performing for an audience. In one shot on a still-active Instagram account, reenactors in perfect period costume—minus the cloth masks—perform drilling exercises on the scenic coast of Newburyport, Massachusetts. They travel and camp throughout New England, sometimes with their families. The groups draw people from across the political spectrum who generally share an interest in history, and some members have been doing this for more than 30 years. Fidelity and authenticity are valued: reenactors buy wool from England and India and make their own clothes. And everything—from fundraising to coordination to event planning—is done through Facebook and Instagram.

They consider themselves history nerds, not violent extremists. “This is what it actually is,” said Eric Brack, a close friend of Nolan who participates in groups like the 1st Regiment of Foot Guards, named for a British unit that served as the king’s personal bodyguards. “It’s people in 1776 in wool clothes with muskets that it takes a minute to load and shoot.”

Facebook’s slapdash content moderation has affected other groups that often sit at the vanguard of where tech, free speech, and politics meet: namely sex workers, Palestinian activists, and political dissidents. Some of these decisions came from senior company leadership seeking to curry favor with governments; others were the accidental byproduct of automated moderation systems, which Facebook has said have a 10 percent failure rate. The apparent result has been to muddle the rules of what’s acceptable on Facebook while applying censorship both indiscriminately and in a targeted manner, often harming people whose speech rights have the least protection on and off the platform.

With better weather on the way and the reenactment season starting up again, Nolan and Brack say they miss the community they once had. “After a day fighting at Fort Ticonderoga and you’re enjoying a punch bowl with some strangers, you quickly make some new friends,” said Nolan. Now he feels cut off from the community that relied on him to manage some of their pages. (After receiving a list of questions from The New Republic, a Facebook spokesman said that the company had restored Nolan’s accounts. The company did not explain what happened except to say that “the profiles were removed in error.”)

Most people banned from Facebook don’t have journalists inquiring about their stories, nor is every situation clear-cut. As my colleague Melissa Gira Grant noted in a recent piece on platform censorship, “Political activism can be swept up in platform moderation efforts.” “Many of the people that are most harmed by the collateral damage of content moderation are people without a lot of power,” Evan Greer, deputy director of Fight for the Future, told her, “like Muslims whose content gets caught up in filtering tools allegedly targeting ‘terrorism’ content, as well as LGBTQ people and sex workers who get flagged as ‘adult’ content.”

Good content moderation is not synonymous with frequent deletion of potentially illicit material. Instead, it’s about transparently implementing clear policies, providing avenues of appeal to actual human beings, and creating a safe environment for people to express themselves and communicate. As it is, Facebook’s moderation policies fail to achieve these goals, and even average users who might otherwise have no reason to consider their Facebook posts risqué are now worried about activating an algorithmic tripwire. (Nolan said that other friends of his had been sent to what they jokingly call “Zuck jail.”)

Nolan wonders if an enormous company like Facebook can actually handle the scale of the content moderation issues it faces, especially without investing more in actual human moderators. He is not alone. A study last year from New York University described Facebook’s content moderation systems as “grossly inadequate” compared to the volume of material uploaded to the platform. The study also called for hiring tens of thousands more human moderators and for providing them with better support and benefits for what can be a mentally and emotionally taxing job.

Other scholars have echoed the point that Facebook can do better on moderation; it just has to make its policies clearer and hire more people, thereby cutting into its ample profits, which amounted to more than $29 billion in 2020. So far, Facebook doesn’t seem willing to do that, with executives promising better A.I. systems just down the road. That may just provide more grist for prominent critics like Elizabeth Warren, who would prefer the social media giant be broken up into several smaller, more competitive companies.

“How much longer can these platforms exist if this is untenable, if you have billions of people on your website posting everything from fart jokes to beheading videos?” Nolan asked. “How are you able to hold all this together?”