Pentagon Picks OpenAI Over Anthropic as AI War Powers Debate Reaches a Breaking Point
OpenAI Strikes Classified AI Deal with the Pentagon Hours After Trump Blacklists Anthropic
Background: A Week of Political Turbulence in AI
The week of February 23–28, 2026 proved to be one of the most dramatic in the history of the artificial intelligence industry. At its center: a high-stakes standoff between the U.S. Department of Defense (DoD) — now officially rebranded by the Trump administration as the "Department of War" — and leading AI companies over the terms under which advanced AI models could be deployed in classified military environments.
The Pentagon had been pressing AI firms to allow their models to be used for "all lawful purposes," a broad mandate that set off alarm bells at several frontier labs. Anthropic, the AI safety company behind the Claude model family, drew a firm red line: its technology would not be used for mass domestic surveillance or fully autonomous weapons systems.
Anthropic Blacklisted: A Historic Designation
On Friday, February 28, 2026 (Eastern Time), Defense Secretary Pete Hegseth made an extraordinary move — formally designating Anthropic a "Supply-Chain Risk to National Security." This designation, typically reserved for companies with ties to foreign adversaries, carried severe practical consequences: effective immediately, no contractor, supplier, or partner doing business with the U.S. military would be permitted to conduct any commercial activity with Anthropic.
President Donald Trump also took to social media, calling the company's leadership "Leftwing nut jobs" and directing all federal agencies to cease using Anthropic's products after a six-month phase-out period.
Anthropic responded that it was "deeply saddened" by the decision and announced it intends to challenge the designation in court. The company stated it had not yet received direct communication from the Department of War or the White House regarding the status of negotiations.
OpenAI Moves Quickly: A Deal Is Done
Within hours of Anthropic's blacklisting, OpenAI CEO Sam Altman announced late Friday evening (Eastern Time) that his company had reached an agreement with the Department of War to deploy its AI models inside the Pentagon's classified network.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," Altman wrote on X. "In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome."
Altman acknowledged at an internal all-hands meeting that the deal was "definitely rushed" and that "the optics don't look good" — but defended the decision as necessary to de-escalate a dangerous confrontation between the government and the AI industry.
Three Red Lines: What OpenAI Will and Won't Allow
OpenAI published a detailed blog post outlining the terms and protections embedded in its agreement. The company laid out three non-negotiable red lines:
- No mass domestic surveillance — OpenAI technology cannot be used to surveil U.S. persons at scale.
- No autonomous weapons — The models cannot be used to direct weapons systems without human oversight.
- No high-stakes automated decisions — This includes social credit-style systems or equivalent automated judgment mechanisms.
OpenAI claims its agreement goes further than any prior classified AI deployment contract, including Anthropic's original contract, by using a multi-layered enforcement approach: cloud-only deployment (no edge deployment), an internal "safety stack" that the government cannot override, cleared OpenAI personnel embedded in operations, and strong contractual language.
Altman stated that if a model refuses to perform a task, the government has agreed it will not force OpenAI to override that refusal.
Industry Reaction and Scrutiny
The deal immediately drew both praise and criticism. More than 60 OpenAI employees and 300 Google employees had signed an open letter earlier in the week urging their employers to support Anthropic's position.
Critics raised questions about the contract's actual protections. Techdirt's Mike Masnick argued the deal "absolutely does allow for domestic surveillance," pointing to its reference to Executive Order 12333 — which nominally governs intelligence collection outside U.S. borders but is widely known to sweep in data on American citizens.
OpenAI's head of national security partnerships, Katrina Mulligan, pushed back: "Deployment architecture matters more than contract language," she wrote on LinkedIn, arguing that cloud-only API deployment ensures models cannot be repurposed for surveillance at the infrastructure level.
Market Signal: Claude Surges Past ChatGPT
In a striking market development, Anthropic's Claude overtook OpenAI's ChatGPT in Apple's App Store rankings on Saturday, March 1, 2026 — a possible sign of public sympathy toward Anthropic's harder stance on AI safety guardrails.
What Happens Next?
OpenAI has publicly called on the Department of War to extend the same contract terms to all AI companies, including Anthropic, in an effort to de-escalate the broader conflict. Altman expressed hope that the deal would serve as a model for responsible government-AI collaboration, rather than a wedge that divides the industry.
Anthropic, for its part, is preparing legal action and continues to push for reinstatement to government contracts. The situation remains fluid as of Monday, March 2, 2026.