Within hours of the first U.S. and Israeli weapons exploding in Iran on Saturday morning, at least 153 people, many of them children, died in an explosion at a girls’ school in southern Iran, according to the BBC. The bombing was first reported by an Iranian news agency. Israel said it wasn’t aware of any IDF operations in the area, and a U.S. spokesman said, “We take these reports seriously.”
We don’t know if the children died because of Israeli or U.S. weapons. But it might not matter. The two militaries have been working together on the planning for this attack and have been sharing technology that Israel has been practicing with on the civilian population of Gaza for more than two years.
Assuming the report is accurate, it means that we are immediately witnessing the fullest expression of the most inhumane weapons of the twenty-first century: Autonomous bombs and missiles. Their “autonomy” refers to the fact that humans need not be “in the loop” in any meaningful way when deciding where to target or whether to launch such weapons. A combination of human intelligence collected over time, geolocation of mobile phones, and recent images taken by satellites, drones, or people who post images on social media sites contribute to the data these systems digest to guess if an enemy combatant is present at the suspected target. Military officers merely outsource their own moral and military judgment to proprietary systems. Then things blow up.
When such systems get it wrong, they get it very wrong. Civilians, often children, pay the price for the errors of autonomous systems. We have seen it for five years in Ukraine. We have seen it for three years in Gaza. We see it in real time this week in Iran. And it’s unconscionable.
There is a phrase that has quietly become one of the most consequential in American national security law, and it appears in no statute, no executive order, no treaty. It is a phrase that the Pentagon began demanding AI companies accept as a condition of doing business with the United States military: Any lawful use.
Last week, Anthropic, one of the most advanced artificial intelligence laboratories in the world, declined to accept those words without restriction. On February 26, CEO Dario Amodei released a statement listing conditions the Pentagon was imposing on contractors and said: “[T]hese threats do not change our position: we cannot in good conscience accede to their request.”
The next day, the Trump administration reached for instruments of punishment normally reserved for the People’s Republic of China. President Trump posted on Truth Social that “the leftwing nut jobs at Anthropic” had made “a DISASTROUS MISTAKE” and directed every federal agency to immediately cease all use of Anthropic’s technology. Defense/War Secretary Pete Hegseth declared Anthropic a “supply chain risk to national security,” a designation typically applied to Huawei and other companies deemed to be extensions of hostile foreign states. Any contractor, supplier, or partner doing business with the U.S. military was immediately barred from any commercial activity with Anthropic.
Anthropic had asked for two things: a contractual guarantee that its generative AI model, Claude, would not be used as part of autonomous weapons systems, and a guarantee that it would not be used for mass domestic surveillance of American citizens. These were not new conditions. Anthropic had maintained them explicitly in its usage policy since June 2024, before the Pentagon contract worth up to $200 million was signed in July 2025. The administration knew the terms. It signed anyway. And then it decided, months later, that those terms were intolerable.
To understand what “any lawful use” means in practice, it helps to understand what it is designed to eliminate: The possibility that a private company could tell the United States military how its technology may or may not be used. In the Pentagon’s view, once a tool is purchased, the buyer sets the terms of its application. The vendor’s values, safety commitments, and ethical frameworks become, at the moment of transaction, irrelevant. The military has its own lawyers. It has its own review processes. It has its own standards. And given the degradation of legal safeguards and restrictions on the entire executive branch in the last year, almost any act of depravity or mass murder could be ruled “lawful” by a Pentagon that has purged itself of its most moral and ethical lawyers and leaders and a Supreme Court devoted to maximizing Trump’s autocracy.
The same logic—that internal military review is sufficient to govern the deployment of powerful technologies—underwrote the expansion of the NSA surveillance state revealed by Edward Snowden. It underwrote the algorithmic targeting programs in Yemen and Somalia, where AI-assisted kill lists generated strikes that killed the wrong people with a regularity that official reviews consistently declined to examine.
In his February 26 statement, Amodei said: “Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” This technical conclusion is shared by a significant portion of the AI research community, grounded in the basic observation that large language models hallucinate, misidentify, and fail in ways that are not fully predictable. In a military context, an unpredictable failure is a dead civilian with no accountable author.
The military wants a powerful tool, available at scale, deployable at speed, unconstrained by the values of its designers. It wants an AI that will do what it is told without the inconvenience of a conscience embedded in its terms of service.
The category of “supply chain risk”—previously occupied chiefly by companies suspected of channeling data to Beijing—now encompasses companies that ask that their AI not be used to kill people without a human being making the final decision.
The most alarming yet unsurprising element of what followed is how quickly the rest of the industry pandered to Trump. Within hours of the administration’s announcement, OpenAI CEO Sam Altman posted that his company had struck a deal with the Department of Defense to deploy its models on classified networks. Altman claimed that OpenAI’s agreement included the same prohibitions on autonomous weapons and domestic surveillance that Anthropic had demanded—the exact conditions for which Anthropic had been declared a supply chain risk. This contradiction was left unaddressed.
Elon Musk’s xAI, whose Grok system stands to inherit Anthropic’s classified network access, had already agreed to the morally noxious standard of “all lawful purposes.” Musk himself posted that “Anthropic hates Western Civilization.” The immediate beneficiary of Anthropic’s ejection was its direct commercial competitor, operated by a man who is simultaneously one of the administration’s most powerful insiders.
In military and intelligence matters, the stakes of this sort of crony anticapitalism are quite high. A supply-chain-risk designation is not merely a policy decision. It is a weapon of economic coercion. It means that any company doing business with the Pentagon must certify it has no commercial relationship with Anthropic. The effect is not to remove Anthropic from one government contract. It is to make Anthropic radioactive to any enterprise with government ambitions, which in the technology sector is most enterprises. This could include major universities. Northeastern, Syracuse, Dartmouth, and Howard universities have all announced partnerships with Anthropic to meld its products with their teaching and research missions.
Claude was, at the time of its ejection, the only AI model deployed on the Pentagon’s classified networks. It was used, according to reporting by The Wall Street Journal, in the operation to capture Venezuelan President Nicolás Maduro. It could have been used in an operation against Iran (and might be used today, because Claude is embedded in many essential services used by the military including services provided by Palantir). The military and Palantir chose Claude because it had the level of quality control necessary for the most sensitive and demanding tasks.
The systems that will replace Claude in classified environments—Musk’s Grok, and presumably OpenAI’s models under whatever deal Altman has negotiated—will arrive with fewer commitments to autonomous weapons restrictions and will be operated by companies that have demonstrated they will accommodate the administration’s demands. They are also notoriously shoddy products produced by megalomaniacal CEOs. This could endanger U.S. service members and contractors as well as further endanger Iranian civilians.
What those systems will be used for, under what circumstances, with what human oversight, and subject to what review when something goes wrong—these questions have no public answers, because the administration’s entire posture has been that such questions are none of a private company’s business.
When the government asserts the right to use powerful AI without vendor-imposed constraints, and punishes the vendor that declines to comply, it eliminates one of the only remaining points of friction in the kill chain. The vendors who remain are the ones who said yes. And what they said yes to was, deliberately and explicitly, left undefined. This was not a good situation last week. We should not have to depend on the whims of technology oligarchs to protect lives and our democracy. Sadly, that is the state of American governance in the twenty-first century.
Democracies demand accountability. In many ways, accountability is forgotten in America. It’s been evacuated from our government by the Trump movement, first by removing legal safeguards and the inspectors general who were there to enforce them, and then by the imposition of opaque artificial intelligence systems throughout the bureaucracy, often at the direction of Musk’s boy army, DOGE. Accountability requires, at minimum, that someone be held responsible for consequential decisions. Someone should be punished when things go badly. Also, someone should be honored and rewarded when things go right.
Autonomous weapons are, by design, accountability-dissolving machines. When an algorithm makes a targeting recommendation, and a human approves it in seconds without adequate information (or not at all, as many systems might be human-free), and the AI system that generated the recommendation is governed by a contract that says it can be used for “all lawful purposes,” the chain of accountability does not merely become hard to trace. It becomes nonexistent.
While the Anthropic debacle is a fresh assertion of the autocratic power Trump wields over the private sector, it’s just a corrupt twist in a long plot the militaries of the world have been running for at least a decade.
The war in Ukraine has become a laboratory for artificial intelligence and autonomous weapons. Russian and Ukrainian soldiers, civilians, farmers, and grandmothers are the subjects and victims of the experiment. What is being tested is nothing less than the proposition that machines guided by algorithms can make life-and-death decisions faster, cheaper, and more reliably than humans. We should be deeply unsettled by how enthusiastically this proposition is being embraced, and how little democratic deliberation has accompanied it.
Ukraine and Russia have both deployed what analysts cautiously call “loitering munitions”—drones that can hover over a battlefield, identify targets, and strike, sometimes with minimal human intervention in the final moments of the kill chain. Ukraine’s Brave1 defense tech cluster, established in 2023, has accelerated the integration of artificial intelligence into drone warfare, enabling target recognition systems that draw on machine learning to distinguish combatants from civilians—or, rather, to attempt such distinctions under battlefield conditions that confound even trained human observers.
Russia, for its part, has deployed the Lancet-3, a loitering munition with alleged semi-autonomous targeting capabilities, responsible for the documented destruction of Ukrainian armor, artillery, and infrastructure. The Lancet’s lethal power has rattled NATO planners who spent decades preparing for a different kind of war.
The International Committee of the Red Cross has warned, repeatedly and with increasing alarm, that autonomous weapons systems risk violating international law precisely because the contextual moral judgments required in warfare—proportionality, distinction, precaution—are not reducible to pattern recognition. Yet the pressure of battlefield necessity, compounded by the venture-capital logic now deeply embedded in defense procurement, pushes development forward regardless.
Ukraine’s innovative and genuinely heroic use of drone technology to resist a brutal invasion should not blind us to the longer arc. Every algorithm trained on Ukrainian targeting data, every autonomous engagement protocol normalized by this conflict, becomes a template. Silicon Valley’s defense-tech renaissance—Palantir’s celebrated role in Ukrainian battlefield analytics chief among them—is not driven primarily by solidarity with a besieged democracy. It is driven by contracts, markets, and the opportunity a live war provides to experiment.
While Ukraine has been a vast lab in which civilian casualties have been considered necessary externalities in the conflict, the genocide in Gaza seems like something far different. It is not only a humanitarian catastrophe. It is a demonstration project.
On December 26, 2024, The New York Times published one of the most significant pieces of investigative journalism to emerge from the Gaza war. Reviewing dozens of military records and interviewing more than 100 soldiers and officials, reporters documented how Israel had “severely weakened its system of safeguards meant to protect civilians, adopted flawed methods to find targets and assess the risk of civilian casualties, routinely failed to conduct post-strike reviews of civilian harm or punish officers for wrongdoing, and ignored warnings from within its own ranks and from senior U.S. military officials about these failings.” This was not a rogue operation. It was policy, set by an order issued at the highest levels soon after the October 7, 2023, attacks on Israeli civilians.
That order—unprecedented in Israeli military history—transformed the rules of engagement within hours of the Hamas attack. Where previous conflicts had permitted strikes only when officers concluded no civilians would be hurt, or occasionally when up to five civilians might be endangered, the new order instantly elevated the acceptable threshold to 20 civilian deaths per strike as a standing baseline. Suddenly, the military could target rank-and-file militants at home, surrounded by families. The definition of a legitimate military target expanded to include lookouts, money changers suspected of handling Hamas funds, and the entrances to tunnel networks typically located inside residential buildings.
A secondary order issued on October 8 went further still, declaring that strikes on military targets could “cumulatively endanger up to 500 civilians each day.” The effect was swift and catastrophic. Israel fired nearly 30,000 munitions into Gaza in the war’s first seven weeks—more than in the next eight months combined.
Since October 2023, the Israeli military has deployed AI systems at a scale that has no precedent in the history of urban warfare. The most extensively documented of these is a system called Lavender, reported in April 2024 by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, based on testimony from multiple Israeli intelligence officers.
Lavender used machine learning to generate a list of tens of thousands of Palestinians flagged as suspected militants—at one point reportedly numbering around 37,000 individuals—and operated with what sources described as a 10 percent error rate that its operators were said to have accepted as tolerable.
The Times investigation substantially confirmed and extended this picture. Intelligence officers, working under intense pressure to propose new targets each day after burning through a pre-war database of vetted targets within the first days of the conflict, turned to automated surveillance systems and AI to triangulate data and locate militants at a pace no human team could manage manually. Israel has long maintained a database listing phone numbers and home addresses of suspected militants. Tapping Gaza’s telecom networks, intelligence officers tracked calls associated with people on the list. But the databases, according to six officers interviewed by the Times, sometimes contained outdated data, increasing the likelihood of misidentifying a civilian as a combatant. And the volume of calls was far too great for manual review. Artificial intelligence was deployed to close the gap.
This is what the military-industrial complex looks like when it has been through the Silicon Valley wash cycle: the same logic of optimization, scale, and throughput that gave us recommendation engines and behavioral ad-targeting now applied to the industrial generation of kill lists. The language is sanitized (“machine-assisted decision-making,” “target generation,” “automated computing systems”), but the function is not. Israeli intelligence officers themselves used starker language, describing their operations as a “mass assassination factory,” according to +972 Magazine.
The Israel Defense Forces have disputed key elements of the +972 reporting, insisting that human commanders retain final authority over strikes and that AI systems function as decision-support tools rather than autonomous executors. The Times investigation added necessary nuance to this claim. Yes, human officers formally approved targets, but when hundreds of AI-generated targets are being processed each day, when verification is inconsistent across units, when a statistical model built on neighborhood cellphone data stands in for genuine surveillance of a specific building, the philosophical category of “meaningful human control” becomes a bureaucratic fiction rather than a genuine safeguard. The fog of war is more a flurry of drones and missiles now.
The legal architecture that was supposed to govern these questions, such as the Convention on Certain Conventional Weapons, has proven entirely inadequate to the speed at which autonomous and semi-autonomous systems have been deployed. States with advanced militaries have systematically blocked binding treaty negotiations on autonomous weapons, preferring voluntary guidelines that impose no enforceable constraints. The United States, which provides Israel with the weapons, the intelligence partnerships, and the diplomatic protection at the United Nations that make the Gaza campaign possible, has been among the most consistent opponents of binding international rules.
What was normalized in Gaza will not stay in Gaza. This is perhaps the most important thing to understand, and the thing that the framing of the current conflict as a local, bounded military operation most dangerously obscures. Every targeting algorithm stress-tested over Gaza’s densely populated streets, every AI system whose performance data is being collected in real time, represents intellectual property and operational knowledge that will flow via export and emulation, through the global defense technology market into other conflicts, other theaters, other cities.
Israel is among the world’s leading exporters of military technology, and its battlefield-proven systems have historically commanded premium prices precisely because they have been tested under live conditions. The Times investigation provides something these systems previously lacked: a detailed, independently documented record of how AI-assisted target generation performs at scale in a major urban war.
There is a concept in ethics called “moral distance.” It refers to the psychological and cognitive space that separates a person who causes harm from the harm itself. Autonomous weapons systems are, among other things, moral distance machines. They allow states to industrialize killing while diffusing responsibility across systems, operators, commanders, procurement officers, software engineers, and shareholders, until no individual person feels or can be made to feel that they bear meaningful accountability for a specific death.
The International Committee of the Red Cross, Amnesty International, Human Rights Watch, and a growing coalition of United Nations special rapporteurs have all called for a halt to the deployment of autonomous weapons systems without enforceable accountability mechanisms. These calls have been met with the polite indifference that powerful states reserve for international norms they find inconvenient.
The question democratic societies must answer—urgently, before the template is fully set—is whether they intend to remain passive consumers of the demonstration, or whether they will demand that the architects of algorithmic warfare be held to the same standards of accountability they claim to believe in when the cameras are pointed somewhere more comfortable.
The deaths of the girls in southern Iran should haunt us forever. They did not ask to live in a world in which billionaires and their political partners play with remote-control toys for fun and profit. We have major moral questions to ask about how war should be waged going forward. But suddenly, this week, we have an urgent call to voice disgust as well as dissent.