Meta-Indictment

Arrest Mark Zuckerberg for Child Endangerment

Shocking new revelations about Instagram in a lawsuit against social media companies should pave the way for an ambitious prosecutor to file criminal charges.

[Photo: Mark Zuckerberg wearing Meta Ray-Ban AI glasses during a presentation in September. Credit: David Paul Morris/Bloomberg/Getty Images]

Should Mark Zuckerberg be handcuffed—literally—for the threat his products pose to millions of children? That’s the inescapable question raised by a legal brief filed last month in a civil case against major social media companies.

The litigation, which alleges that social media platforms have been purposefully cultivating addiction among adolescents, has been working its way through the courts since 2022. But the details laid out in this new court filing, and reported recently by Time, contain genuinely horrifying claims about Zuckerberg’s Meta, the parent company of Facebook and Instagram. And they suggest that—in addition to the tort claims being pursued by the families, school districts, and state attorneys general behind this multidistrict litigation—the corporate executives responsible for these harms could and should be criminally prosecuted for child endangerment.

The plaintiffs’ brief alleges that Meta was aware that its platforms were endangering young users, including by exacerbating adolescents’ mental health issues. According to the plaintiffs, Meta frequently detected content related to eating disorders, child sexual abuse, and suicide but refused to remove it. For example, one 2021 internal company survey found that more than 8 percent of respondents aged 13 to 15 had seen someone harm themselves, or threaten to do so, on Instagram during the past week. The brief also makes clear that Meta fully understood the addictive nature of its products, with plaintiffs citing a message from one user-experience researcher at the company that Instagram “is a drug” and, “We’re basically pushers.”

Perhaps most relevant to state child endangerment laws, the plaintiffs have alleged that Meta knew that millions of adults were using its platforms to inappropriately contact minors. According to their filing, an internal company audit found that Instagram had recommended 1.4 million potentially inappropriate adults to teenagers in a single day in 2022. The brief also details how Instagram’s policy was not to take action against sexual solicitation until a user had been caught engaging in the “trafficking of humans for sex” a whopping 17 times. As Instagram’s former head of safety and well-being, Vaishnavi Jayakumar, reportedly testified, “You could incur 16 violations for prostitution and sexual solicitation, and upon the seventeenth violation, your account would be suspended.”

The decision to expose adolescents to these threats was, according to the brief, an entirely knowing one. As plaintiffs allege, by 2019 Meta researchers were recommending that Instagram shield its young users from unwanted adult contact by making all teenage accounts private by default. Meta’s policy, legal, and well-being teams all echoed this recommendation, stressing that the policy would “increase teen safety.” But the primary response by Meta’s corporate leadership was to question how this policy would impact its profits. The company directed its growth team to analyze what a default private setting would do to engagement. They found it would have a negative effect—according to one employee quoted in the court filing, limiting “unwanted interactions” would likely cause a “potentially untenable problem with engagement and growth.” As a result, Meta failed to implement this safety recommendation until 2024, allowing billions of nonconsensual interactions between teenagers and adult strangers during the intervening years. So many of these encounters were inappropriate, according to plaintiffs, that Meta coined an acronym for them: “IIC,” short for “inappropriate interactions with children.”

If your social media platform is facilitating so many inappropriate interactions between adult strangers and children that you need a shorthand to describe such encounters, then you should be liable for some of the resulting harm. But could that liability extend to the criminal sphere?

It depends, first of all, on the jurisdiction. Every state has some form of law criminalizing conduct that abuses, neglects, or endangers children. While some states limit this crime to parents or guardians, other states outlaw child endangerment more broadly. To take one example, Massachusetts’s child endangerment statute reads: “Whoever wantonly or recklessly engages in conduct that creates a substantial risk of serious bodily injury or sexual abuse to a child or wantonly or recklessly fails to take reasonable steps to alleviate such risk where there is a duty to act shall be punished by imprisonment in the house of correction for not more than 2.5 years.” This seems like an accurate way to characterize Meta’s creation of what one state’s attorney general has described as a “marketplace” allowing “pedophiles, predators, and others engaged in the commerce of sex” to “hunt for, groom, sell, and buy sex with children and sexual images of children at an unprecedented scale.”

While child endangerment laws were not originally written with social media companies in mind—Massachusetts’s law was passed in 2002 as a response to the Catholic sex abuse scandal—that state’s highest court made clear that the statute’s text encompasses any and all reckless conduct that creates a substantial risk of harming a child, noting, “If the Legislature had intended a narrower set of protections, it readily could have drafted the statute to accomplish that more limited objective.” Indeed, the threats allegedly facilitated by Meta are far more severe than many of the hazards—like exposing children to marijuana smoke or leaving them unsupervised in a house—that Massachusetts courts have deemed sufficiently dangerous to constitute the crime of child endangerment in the past.

So in at least some jurisdictions it seems quite possible that Meta officials could be criminally prosecuted for the harmful effects they knew their platforms were having on young people. But there would need to be a lot of public utility in such a prosecution to make it worthwhile for a district attorney’s office with limited resources to take on some of the wealthiest Big Tech executives in the world. Should a local prosecutor accept this daunting challenge? I believe the answer to this question is yes, for two important reasons.

First, there is a chance that current civil litigation against social media giants may end up failing. The biggest obstacle these lawsuits face is a 1996 federal law, the Communications Decency Act. Section 230 of that law grants digital communications platforms broad immunity from civil liability for the user-generated content they host. While plaintiffs are seeking to get around this shield with new litigation strategies—the suits discussed here focus on social media companies’ negligence in the design of their platforms and deception about the known harms of their products, rather than the actual content itself—it’s unclear whether this product-liability theory will succeed in piercing the immunity afforded thus far by Section 230.

But Section 230 only provides for civil immunity for social media companies. It doesn’t say anything about criminal liability. So criminal prosecution for child endangerment may offer a more straightforward—and, if worse comes to worst on the civil side, perhaps the sole—path to accountability for these bad actors.

Second, criminal prosecution could be a highly effective tool for forcing these companies to adopt more pro-social practices. Meta earned over $62 billion in net profits in 2024. Even a massive, multibillion-dollar settlement or civil judgment could potentially be swallowed by the company as the price of doing business. But I can guarantee that Zuckerberg does not want to spend any time in a state prison. Even the credible threat of a multiyear sentence for these corporate executives might be enough to significantly change Meta’s decision-making, in a way that few other remedies could.

Every day, in jurisdictions all across the country, people are prosecuted and incarcerated for committing the crime of child endangerment based on conduct that was far less knowing, and that caused far less harm, than what is being alleged of Meta’s corporate leadership. If the claims against these companies are true, then executives like Zuckerberg have absolutely engaged in reckless conduct that has created a substantial risk of harming young people. In other words, they have committed crimes—and the mere fact that they are wealthy and powerful should not allow them to escape accountability. Local prosecutors have a chance to win justice for teenage victims who have been endangered by these profit-seeking tech titans. Here’s hoping they take the opportunity.