Anthropic Takes the U.S. Government to Court: Inside the Unprecedented AI Safety Showdown That Could Reshape the Tech Industry
News Summary
San Francisco, CA — Monday, March 9, 2026 (EDT)
Anthropic, one of the world's leading artificial intelligence companies and the maker of the Claude AI model, filed two federal lawsuits on Monday against the Trump administration, the Department of Defense, and more than a dozen other federal agencies. The legal action challenges the government's unprecedented decision to label the San Francisco-based AI company a "supply chain risk to national security" — a designation historically reserved for firms tied to foreign adversaries.
The Breaking Point: A Two-Week Standoff
The conflict between Anthropic and the Pentagon escalated rapidly over the past two weeks. At the center of the dispute is CEO Dario Amodei's refusal to allow the company's Claude AI model to be used without restrictions — specifically for fully autonomous weapons systems and the mass domestic surveillance of American citizens. Amodei met with Defense Secretary Pete Hegseth in February in hopes of brokering a deal, but negotiations broke down publicly.
On February 27, President Trump posted on social media directing all federal agencies to "IMMEDIATELY CEASE all use of Anthropic's technology," branding the company "A RADICAL LEFT, WOKE COMPANY." That same day, Secretary Hegseth announced that Anthropic would be officially designated a supply chain risk, further declaring that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
The designation was formally confirmed on March 4, 2026 (EST), making Anthropic the first U.S. company in history to receive this classification.
The Lawsuits: Two Fronts, One Message
Anthropic filed two separate lawsuits — one in the U.S. District Court for the Northern District of California, and another in the U.S. Court of Appeals for the District of Columbia Circuit — each targeting different aspects of the government's actions. The 48-page complaint describes the administration's moves as "unprecedented and unlawful."
The filings make three central legal arguments. First, that the federal government retaliated against Anthropic for exercising First Amendment-protected speech on matters of public significance — specifically, its stated positions on AI safety. Second, that President Trump exceeded his authority in ordering all federal agencies to stop using Anthropic's technology. And third, that Anthropic was denied adequate due process before being subjected to the supply chain risk designation.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the lawsuit states. "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."
The Financial Stakes
The economic consequences of the blacklist are already being felt. Anthropic warned in its filing that "contracts with the federal government are already being canceled," and that "hundreds of millions of dollars" in near-term revenue are at risk. The company is projected to generate approximately $14 billion in revenue in 2026, with more than 500 customers paying at least $1 million annually for Claude. Its most recent valuation stands at $380 billion.
Beyond government contracts, the company is concerned that the supply chain risk designation — even if narrowly scoped to Pentagon-related work — is casting doubt over its broader commercial relationships.
Industry Reaction: Rivals Show Support
In an unusual display of solidarity across competing AI labs, dozens of scientists and researchers from OpenAI and Google DeepMind filed an amicus brief in their personal capacities on Monday supporting Anthropic. The group argued that the supply chain risk designation could damage U.S. competitiveness in AI and suppress important public debate about the responsible development of the technology.
Notably, OpenAI — arguably Anthropic's biggest rival — struck its own deal with the Pentagon just hours after the government punished Anthropic. However, OpenAI also publicly stated that it disagreed with designating Anthropic a supply chain risk, saying: "A good future is going to require real and deep collaboration between the government and the AI labs."
The Pentagon's Position
Defense Department officials maintained that private companies cannot dictate how the U.S. government deploys technology in warfare and tactical operations. The Pentagon insisted on having full flexibility to use AI for "any lawful use," arguing that Anthropic's restrictions could endanger American lives on the battlefield. In response to the lawsuit, a DoD spokesperson stated simply that the agency does not comment on matters in litigation.
White House spokesperson Liz Huston said in a statement: "The President and Secretary of War are ensuring America's courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders."
Claude Still in Use — Even During the Iran Conflict
In a striking irony, CBS News and CNBC reported that the Pentagon has continued to use Claude during the ongoing U.S. and Israeli military operations involving Iran — even after the formal blacklist was put in place. The Wall Street Journal also reported that Claude had previously been used in military operations, including in intelligence assessments and identifying targets in Iran, and in the operation that led to the arrest of Venezuelan leader Nicolás Maduro.
Anthropic acknowledged its partnership with national security contractors, including Palantir, for data processing, trend identification, and supporting government decision-making — work it says falls within ethical boundaries.
What Happens Next
Anthropic is seeking injunctive relief to block Hegseth's supply chain risk order and have it declared "arbitrary, capricious, an abuse of discretion, and contrary to law." The company also stated that its decision to seek judicial review does not foreclose further dialogue with the government.
"Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security," an Anthropic spokesperson said, "but this is a necessary step to protect our business, our customers, and our partners. We will continue to pursue every path toward resolution, including dialogue with the government."
The outcome of this case will likely set a major precedent not just for Anthropic, but for the entire AI industry — determining how much power the government holds to compel technology companies to remove safety guardrails, and whether corporate ethical commitments on AI can coexist with national security demands.