Bombs and Porn Are Bad Reasons to Build More Data Centers | The New Republic
but china!


Is this big AI push supposed to help the U.S. kill more Iranian kids? To help school shooters kill kids at home? Or just to fill the internet with rubbish? We need better answers to these questions.

An excavator at a data center under construction in Utah (George Frey/Bloomberg/Getty Images)

Data center construction isn’t going as planned. Bloomberg reported earlier this month that nearly half of the 12 gigawatts’ worth of data center computing capacity planned for this year has been delayed or canceled. Just a third of those projects are currently under construction, the market intelligence firm Sightline Climate estimates in a forthcoming report. Less than a third of the 21.5 GW worth of data center projects announced for 2027 are currently under construction.

That’s thanks in part to shortages of electrical equipment like transformers and batteries. But many projects also face a growing, bipartisan backlash to data center construction. Maine’s legislature recently passed the country’s first-ever statewide moratorium on data center construction for projects over 20 megawatts, to last until November 2027. Similar bills have been introduced in at least a dozen states. The Milwaukee suburb of Port Washington voted by a margin of roughly 2-to-1 for a referendum requiring voter approval before the city can extend any preferential tax treatment to projects valued at $10 million or more. The referendum was a reaction to the city approving tax incentives for a $15 billion data center project to be operated by Oracle and OpenAI. (That project will not be affected by the vote.) In Festus, Missouri, last week, voters kicked out all four incumbents who’d voted to approve a $6 billion data center plan from the developer CRG.

Not all data centers are being built for AI hyperscalers. The International Energy Agency projects that roughly half of the electricity demand from new projects planned through 2030 will come from facilities equipped to meet the needs of generative AI like ChatGPT, as opposed to the less energy-intensive data centers handling cloud storage and more traditional computing tasks. The upsides of those AI-specific projects aren’t self-evident, and there’s a growing divide between the glorious futures promised by big AI developers and what people see the technology actually doing—generating eerie school papers and TikTok content, for instance, or flooding X with AI-generated child pornography. In addition to concerns about rising electricity bills, air pollution, and noise, fights over data centers seem to be channeling deeper frustrations. What and whom, in other words, is all this stuff actually for?

OpenAI CEO Sam Altman last year wrote that “the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present.” On Thursday, meanwhile, Florida officials opened an investigation into whether OpenAI’s ChatGPT had assisted in the planning of a mass shooting last year at Florida State University, and the extent to which chatbots might “facilitate criminal activity, empower America’s enemies, or threaten our national security,” per Florida Attorney General James Uthmeier. Court documents examined by a local news outlet show that the suspected shooter messaged extensively with ChatGPT about video games, dating, his feelings of isolation, and—eventually, less than a year before the shooting last April—guns. On the day of the shooting, in which two FSU students were killed, he asked, “If there was a shooting at FSU, how would the country react?” and “What time is it the busiest in the FSU student union?” ChatGPT responded that the busiest time at the student union is “typically between 11:30 a.m. and 1:30 p.m.”

News also broke this week that OpenAI is backing an Illinois bill that could exempt companies from liability in the event that frontier models—those trained with more than $100 million of computational costs—cause “critical harms,” like creating a weapon of mass destruction, killing more than 100 people, or causing at least $1 billion in property damage. U.S. bombs in February killed between 175 and 180 people at a primary school in southern Iran—mostly girls under the age of 12—with the help of an AI targeting system developed by Palantir for the Department of Defense. Since 2024, the Pentagon has awarded the defense contractor multiyear contracts for that system worth up to $1.4 billion.

On the more quotidian end of things, AI seems to be helping students cheat on their schoolwork, filling social media feeds with news of fake TV shows and bizarre AI fruit-cutting videos, and leading otherwise rational people to fall in love with chatbots. Sloppily added large language model, or LLM, features in apps, email services, and search engines churn out useless summaries of two-line emails and false information spelled out in authoritative tones. While AI’s full impact on the U.S. job market remains “guesswork,” former Biden administration official Jennifer M. Harris argued last week, it’s deepening already historic levels of inequality. Investors are rewarding companies that announce AI-fueled layoffs with surging share prices. “What’s worse,” she adds, is that “much of the trillion-plus-dollar investment in the AI boom isn’t happening in the stock market at all—it’s happening in private funds out of reach to all but the wealthiest, most connected among us.”

Despite claims from AI developers that their technology will eventually solve climate change and run on renewable energy, for now—and into the foreseeable future—their data centers are burning a lot of gas. Meta is planning to fund the construction of seven gas plants to provide 5.2 GW worth of power to its Manhattan-size Hyperion data center complex in rural Louisiana. The state’s regulators previously greenlit Entergy to build three gas plants, generating 2.3 GW for the project. As part of its agreement with Entergy, Meta has also agreed to finance the construction of 240 miles of transmission lines, battery storage, and nuclear power upgrades. More speculatively, Meta made a “commitment” to “help” fund “up to 2,500 megawatts of new renewable resources.” As The Atlantic’s Matteo Wong notes, greenhouse gas emissions from data centers could more than double by the end of the decade—long before AI developers’ well-advertised investments in fusion power are likely to pay off. There is still scant data available on how much electricity data centers actually use.

Unsurprisingly, all this hasn’t made AI especially popular. A Quinnipiac poll published late last month found that just 35 percent of U.S. residents are either “very excited” (6 percent) or “somewhat excited” (29 percent) about AI. Sixty-two percent are “not so excited” (29 percent) or “not excited at all” (33 percent). Eighty percent of poll respondents were “very” or “somewhat concerned” about it, and 55 percent think AI will do more harm than good in their day-to-day lives. Nearly two-thirds think AI will do more harm than good in education. Seventy percent think AI will decrease job opportunities. Sixty-five percent of respondents—including 78 percent of Democrats and 56 percent of Republicans—would oppose building an AI data center in their community.

So, again, why is the U.S. embarking on a state-sponsored spending-and-building binge for a technology that most people here think will make the world—and their lives—worse? Data center developers and supportive politicians promise construction jobs and additional tax revenues that can translate into bigger municipal budgets and tax decreases for residents of the places where data projects are built. Data centers don’t employ huge numbers of people over the long term, though, and tax upsides for their neighbors are often undercut by generous tax incentives offered to developers. The Texas Tribune this week reported that the Lone Star State is expected to lose out on $3.2 billion in sales tax revenue over the next two years as a result of tax exemptions offered to data center developers.

To make their case, AI boosters typically pitch their products in graver terms than just jobs and tax revenue. The Trump administration, prominent Democrats, and AI hyperscalers have all framed “winning the AI race” as a national security imperative, raising fears that China will beat the U.S. to achieve a mysterious state known as “artificial general intelligence,” or something even more powerful called “superintelligence.” These terms are not well defined, and neither is the material threat posed by China “winning” and the U.S. “losing.” The United States is not at war with China. China’s government does not seem especially eager to start a war with the U.S. Our government has in the last few months kidnapped a head of state, threatened to annex Greenland, and started a stupid, reckless war of aggression against Iran—a war in which it’s used AI to kill more than a hundred children. At home, ICE is using Palantir’s AI to hunt down and disappear migrants as the Trump administration demands universities hand over lists of Jews. Criticisms of China’s domestic and foreign policy shouldn’t obscure the fact that the U.S. government is already doing extraordinarily dangerous things with AI. The companies building it are under zero obligation to further the interests of the U.S. government, much less those of most of the people who live here. If something called superintelligence is indeed real, which seems doubtful, do we really want Sam Altman or Donald Trump—who threatened to wipe out an entire civilization earlier this week—to control it?

It isn’t a coincidence that AI hyperscalers in the U.S. have sold their models to the public, policymakers, and investors in terms of what’s likely to happen down the road. The prospect of a foreign power gaining access to a godlike, world-destroying entity certainly inspires more urgency than, say, B2B software, vibe coding, and AI therapists. But rather than taking executives’ predictions about an inevitable utopian/apocalyptic future at face value, conversations about the future of AI infrastructure should be grounded in what most people are presently getting out of it. For now, the answer is not much.