The AI Industry Is Discovering That the Public Hates It | The New Republic
fallout


If there was any doubt over the brewing public backlash to this technology, the last few weeks have erased it.

Anna Moneymaker/Getty Images
OpenAI CEO Sam Altman spoke at the BlackRock Infrastructure Summit in March in Washington, D.C.

On April 10, the house of OpenAI CEO Sam Altman was attacked with a Molotov cocktail by 20-year-old Daniel Moreno-Gama. The suspect, who was arrested the same day, had written a manifesto warning of the existential threat of artificial intelligence. In his missive, he advocated for killing the CEOs of AI companies, and he referred to himself as a “butlerian jihadist” on Instagram (a reference to a war against machines in Frank Herbert’s Dune universe).

Three days prior in Indianapolis, an unknown perpetrator fired 13 shots into the home of local Democratic councilman Ron Gibson while his 8-year-old son was home. Neither was hurt, but a note reading “No Data Centers” was left on the doorstep. Gibson had lent his support to a potential data center project in his district. There have not yet been any arrests in the case.

Both incidents were frightening examples of abhorrent, politically motivated violence. But the reaction, at least on social media, seemed to revel in it.

The mood exemplified by those inflamed social media comments was further reinforced on April 13, when Stanford University released its annual Artificial Intelligence Index, a yearly snapshot of where the industry stands.

In the report, one of the starkest contrasts was the gulf between what AI experts predict for AI’s future and what the public expects from the industry’s designs. On jobs, 73 percent of experts were positive about the long-term effect, and 69 percent were positive about the long-term effect on the economy. Among the public, those numbers were 23 percent and 21 percent, respectively, with nearly two-thirds of Americans believing that AI will lead to fewer jobs over the next 20 years.

A separate survey, released in March 2026 by Gallup, also showed a sharp increase in negative attitudes toward AI among Gen Z. According to the poll, the percentage of Gen Zers who felt excited about AI had dropped from 36 percent to 22 percent, while the share who felt angry about it rose from 22 percent to 31 percent.

These numbers and actions point in the same direction: a rapidly growing populist backlash toward AI, which tech journalist Jasmine Sun defined as “a worldview in which AI is viewed not only as a normal technology, but an elite political project to be resisted … a thing manufactured by out-of-touch billionaires and pushed onto an unwilling public.”

Naturally, violence is never an answer, nor is it a politically effective tactic. But you also cannot ignore how the AI industry’s tone-deaf public messaging has contributed to this reaction.

For years, CEOs like Altman and Anthropic’s Dario Amodei have very publicly oscillated between two suboptimal scenarios. In one, AI exterminates humanity with a biological super-weapon. In the other, AI either takes your job entirely or creates an economy where your only option is to downshift into the gig economy.

These pitches may be perfect for attracting attention at tech conferences or in funding rounds, but they utterly ignore the daily concerns of regular Americans, at a time when the job market (especially for newer graduates) is incredibly shaky; economic gains are concentrated among the top 0.1 percent; and the price of food, housing, and, now, gasoline all continue to skyrocket.

This is the environment in which the AI industry is very publicly asking for hundreds of billions of dollars in continued investment, as well as a massive data center buildout that has had significant effects on local populations’ electrical bills. For example, in Virginia, the epicenter of the U.S. data center boom, residential electrical rates have been projected to increase by up to 25 percent by 2030.

These costs might be tolerated, or even accepted, if there were a clear idea of precisely how AI would streamline and improve the workplace, or offer any tangible public benefit significant enough to justify the underlying trade-offs. But the answers to these questions remain extremely tenuous. According to a February 2026 paper by the National Bureau of Economic Research, 80 percent of companies that have begun actively using AI have reported no impact on company productivity. A separate, widely cited 2025 MIT study revealed that 95 percent of corporate AI pilot programs received zero return.

Even within tech and coding, one of the areas where AI is reported to have the most promise, there’s the question of whether the productivity gains reported can be trusted. In a provocative GitHub post, machine-learning engineer Han-Chung Lee argued that even rosy internal numbers that do show AI-assisted productivity gains are suspect, as they’re produced to hit adoption targets no one can effectively audit.

This isn’t to say that AI doesn’t show immense, and possibly enormously valuable, potential, especially bearing in mind that ChatGPT (arguably the first mainstream demonstration of the technology) launched only in November 2022. It’s natural for new technology to have a bumpy adoption period as both users and designers stress-test its strengths and limitations in the real world.

But the gap between how AI companies talk about themselves and how the general public has experienced the technology (and its side effects) has grown into a chasm, and now the results of these divisions are starting to show: data center projects canceled or delayed; an industry that is less popular than ICE or Donald Trump; and now, violent acts against AI leaders.

In its defense, Big Tech has realized the extent of the potential problems that AI could pose to regular Americans. Earlier in April, for example, OpenAI released an Industrial Policy White Paper, which included suggestions such as the creation of a Public Wealth Fund for all Americans to share in AI growth, revamping social safety nets, and investing in real-time measurement of how AI affects work. In January, Microsoft released a Community-First AI Infrastructure Initiative, promising to subsidize utility rates and minimize water use in communities where it was building data centers.

But it’s one thing for AI companies to make lofty promises in press releases, and another thing entirely for them to follow through consistently on equitable AI development, even when it means undercutting their business advantage.

Here again there is a gap between public statements and on-the-ground facts. Microsoft’s Community-First Initiative sounds great but has no form of independent accountability mechanism built in. OpenAI’s new white paper signals a move toward progressive tech policy, but its president, Greg Brockman, has funneled millions into a super PAC opposing state-level AI regulation efforts. OpenAI is also currently supporting a state legislature bill in Illinois (Senate Bill 3444) that would shield it from liability for large-scale harms caused by its AI models (Anthropic, for its part, opposes the bill).

These examples underscore the pattern that Ronan Farrow noted in his recent New Yorker exposé about Sam Altman—that he would regularly publicly support one position and then quickly reverse course when it seemed like doing so would benefit his company.

If Altman, Amodei, and their Big Tech peers want to rebuild public trust and build technology that genuinely benefits the public, then the path forward isn’t another white paper or more speculation about the existential risks of their products. It’s sustained, verifiable action: genuine transparency about what their products can do, a willingness to accept meaningful regulation and responsibility even at financial cost, and real democratic input from communities on the growth of data centers. Otherwise, this burgeoning AI populism movement will continue to scale up, as will the potential for violence that accompanies it.