DoD Flags Anthropic as Supply-Chain Risk After Clash Over AI

AI News Hub Editorial
Senior AI Reporter
March 5, 2026

The U.S. Department of Defense has designated Anthropic and its AI models as a “supply-chain risk,” Bloomberg reported, citing a senior Pentagon official. The decision means companies and agencies working with the Defense Department must certify that they do not use Anthropic’s technology—an unusual step typically reserved for foreign adversaries.

The designation follows weeks of tension between Anthropic and the Pentagon over how the military could use the company’s AI systems. Anthropic CEO Dario Amodei has refused to allow the company’s models to be used for mass surveillance of Americans or for fully autonomous weapons that make targeting or firing decisions without human involvement. Defense officials have argued that the military’s use of AI should not be restricted by a private contractor.

If enforced broadly, the move could disrupt both Anthropic’s business and ongoing military operations. Anthropic has been the only frontier AI company with systems cleared for classified environments, and the U.S. military has been using its Claude models to help process operational data. According to Bloomberg, Claude is also integrated into Palantir’s Maven Smart System, which U.S. operators in the Middle East use to analyze battlefield information, including during the current Iran campaign.

Several critics have described the Pentagon’s decision as unprecedented. Dean Ball, a former AI adviser in the Trump White House, called the designation a sign of government overreach, arguing it treats a domestic technology company more harshly than foreign competitors.

The move has also drawn backlash from within the broader AI industry. Hundreds of employees at OpenAI and Google have urged the Defense Department to reverse the designation and asked Congress to intervene. In their appeal, the employees said they support companies refusing requests to deploy AI systems for domestic mass surveillance or autonomous weapons without human oversight.

The dispute highlights a growing divide over how advanced AI should be used by governments. While Anthropic has resisted certain military applications, OpenAI recently reached an agreement with the Pentagon allowing the Defense Department to use its systems for “all lawful purposes.” Some OpenAI employees have expressed concern that the broad language of that deal could permit the kinds of uses Anthropic has rejected.

Amodei has described the Pentagon’s actions as “retaliatory and punitive.” He has also suggested that his refusal to publicly support or donate to President Trump contributed to the conflict. Meanwhile, OpenAI president Greg Brockman has openly backed Trump and recently donated $25 million to the MAGA Inc. super PAC.

Beyond Amodei's remarks, Anthropic has not yet issued a formal public statement responding to the designation.

This analysis is based on reporting from TechCrunch.


This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: March 5, 2026


