Blacklisted But Still Running: How Claude AI Powered America's Iran Strike While Being Banned by the Pentagon

March 07, 2026
Claude
4 min

News Summary

In one of the most consequential demonstrations of artificial intelligence in modern warfare, Anthropic's Claude AI model was deployed by U.S. Central Command (CENTCOM) during the joint U.S.-Israeli military campaign against Iran, code-named "Roaring Lion" and "Operation Epic Fury," even though the Pentagon had formally banned the company's technology only hours earlier.

What Claude Did on the Battlefield

According to reporting from The Wall Street Journal, Axios, and CBS News, the U.S. military used Claude for three core functions: intelligence assessment, target identification, and battle scenario simulation. In the first 24 hours of strikes beginning Saturday, February 28, 2026 (ET), the U.S. hit approximately 1,000 Iranian targets — a tempo that military analysts say would have been operationally impossible without AI-assisted planning.

Claude, integrated into a classified targeting platform developed by Palantir and hosted on Amazon Web Services' Top Secret Cloud, processed massive volumes of incoming data — drone footage, radio intercepts, satellite imagery, and human intelligence — to locate, prioritize, and cross-reference high-value targets including Iranian military assets, leadership compounds, and strategic infrastructure. According to reports, the system even conducted preliminary legal reviews to assess compliance with international law before recommending strikes.
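None of the integration details above are public, and nothing about the actual Palantir or AWS systems is implied here. Purely as an illustration of the workflow the reporting describes (multiple intelligence feeds cross-referenced per subject, candidates ranked, and a compliance gate applied before anything is surfaced to an operator), a minimal sketch might look like the following. Every name in it, from IntelReport to passes_legal_review, is hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class IntelReport:
    subject: str        # entity or site the report concerns
    source: str         # e.g. "imagery", "signals", "human"
    confidence: float   # 0.0 to 1.0, assigned upstream


def fuse_and_rank(reports: list[IntelReport]) -> list[tuple[str, float]]:
    """Cross-reference reports on the same subject and rank by combined
    confidence; corroboration across independent source types counts for
    more than repetition from a single source."""
    best_per_source: dict[str, dict[str, float]] = defaultdict(dict)
    for r in reports:
        prev = best_per_source[r.subject].get(r.source, 0.0)
        best_per_source[r.subject][r.source] = max(prev, r.confidence)

    ranked = []
    for subject, per_source in best_per_source.items():
        # combine independent sources: 1 - product of (1 - confidence)
        miss = 1.0
        for c in per_source.values():
            miss *= 1.0 - c
        ranked.append((subject, 1.0 - miss))
    return sorted(ranked, key=lambda item: item[1], reverse=True)


def passes_legal_review(subject: str) -> bool:
    """Placeholder for the compliance gate the reporting describes; in
    practice this is a human and legal process, not a function call."""
    return False  # block by default unless explicitly cleared
```

The combination rule here, one minus the product of per-source misses, is just one common way to reward corroboration across independent sources; the reporting does not say how the real system weighs its inputs.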

Among the operation's reported outcomes was the killing of Iran's Supreme Leader, Ayatollah Ali Khamenei. His location was reportedly confirmed by CIA intelligence described as providing "high fidelity" positional data, which Israel combined with its own preparations to strike a rare gathering of senior Iranian officials at a government compound in central Tehran.

The Paradox: Banned but Still Running

The deployment unfolded under extraordinary political circumstances. On Friday, February 27, 2026 (ET), U.S. Secretary of Defense Pete Hegseth ordered that Anthropic be designated a "Supply-Chain Risk to National Security," effectively banning all contractors, suppliers, and Pentagon partners from conducting commercial activity with the company. The dispute centered on Anthropic's refusal to strip ethical constraints from Claude — specifically, guardrails that prevent the model from supporting fully autonomous lethal weapons systems or enabling mass domestic surveillance of U.S. citizens.

Despite the ban, Claude's integration into classified military systems was so deep that the Pentagon could not disentangle it in time. Multiple defense sources told CBS News and Defense One that replacing Claude's capabilities would take at least three months.

"The Pentagon demanded broader rights to use Claude 'for all lawful purposes,'" sources familiar with the contract dispute explained, but Anthropic declined. The result was a remarkable paradox: an AI tool officially blacklisted by the U.S. government was simultaneously powering one of the most complex military operations of the 21st century.

Claude's Role: Decision-Support, Not Autonomy

Reports across multiple outlets were careful to clarify that Claude did not autonomously control weapons systems or make lethal decisions independently. Rather, it functioned as a high-speed decision-support tool — synthesizing intelligence, modeling outcomes, simulating strike sequences, and presenting recommendations to human operators who retained final authority. Professor Craig Jones of Newcastle University noted that AI-generated recommendations now arrive "faster than the human mind can" process alternative options, compressing the traditional "kill chain" from days or weeks down to seconds.
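The reporting is consistent on one architectural point: the model proposes and a human disposes. As a hedged sketch of that pattern only, with every name hypothetical and no real interface implied, the decision-support loop reduces to a queue of machine-generated options that a human operator must explicitly clear.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Recommendation:
    summary: str                       # what is proposed and why
    evidence: list[str]                # the intelligence it synthesizes
    approved: Optional[bool] = None    # None = pending human review


def review_queue(
    recs: list[Recommendation],
    ask_operator: Callable[[Recommendation], bool],
) -> list[Recommendation]:
    """Present each machine-generated option to a human operator.
    The model only fills the queue; nothing proceeds without approval."""
    cleared = []
    for rec in recs:
        rec.approved = ask_operator(rec)   # human retains final authority
        if rec.approved:
            cleared.append(rec)
    return cleared
```

In this pattern, the compression Jones describes comes from how quickly the queue is populated and triaged, not from removing the approval step.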

Claude was also reported to have played a role in the January 2026 operation that led to the capture of Venezuelan President Nicolás Maduro, signaling that its military integration predates the Iran campaign.

Anthropic's Position

In a statement released Sunday, March 1, 2026 (ET), Anthropic said it had made clear that it "support[s] all lawful uses of AI for national security." The company did not publicly confirm or deny its technology's involvement in the Iran strikes. Anthropic's CEO Dario Amodei had addressed the AI Impact Summit in New Delhi as recently as February 19, 2026 (IST), where discussions of AI governance and safety were prominent.

Analysts now debate whether the U.S. government will reverse its "supply chain risk" designation, given that the military cannot currently replace Claude, or double down and hand the contract to a less principled rival. As one commentator framed it, crushing one of the nation's most safety-conscious AI labs for refusing to remove ethical limits would be, in game-theory terms, "pure stupidity."

The Dawn of AI-Driven Warfare

The Iran strikes have become the world's first large-scale public demonstration of AI-driven warfare. Claude's role — however contested, however paradoxical — marks an inflection point. AI is no longer a future capability being evaluated for the battlefield. It is already there, shaping the pace, precision, and politics of modern conflict.