Hundreds of Tech Workers Urge Pentagon, Congress to Reverse Anthropic ‘Supply Chain Risk’ Label

AI News Hub Editorial
Senior AI Reporter
March 2nd, 2026
Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a “supply chain risk,” arguing the move sets a dangerous precedent for how the federal government treats American technology companies.

The letter, whose signatories include employees of companies such as OpenAI, Slack, IBM, Cursor, and Salesforce Ventures, also calls on Congress to examine “whether the use of these extraordinary authorities against an American technology company is appropriate.” It comes after Anthropic refused to grant the Pentagon unrestricted access to its AI systems, triggering a high-profile clash with Defense Secretary Pete Hegseth and President Donald Trump.

Anthropic’s stance centered on two red lines: it did not want its models used for mass domestic surveillance of Americans or to power fully autonomous weapons capable of making targeting and firing decisions without a human in the loop. The Department of Defense said it had no plans to use the technology in those ways but maintained it should not be bound by vendor-imposed restrictions.

After Anthropic CEO Dario Amodei declined to change those conditions, Trump directed federal agencies to phase out the company’s products over six months. Hegseth said Anthropic would be designated a supply chain risk — a label typically reserved for foreign adversaries — and wrote on X that, effective immediately, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

However, a social media post alone does not formalize that designation. The government must complete a risk assessment and notify Congress before military contractors would be required to sever ties. Anthropic has said the designation is “legally unsound” and pledged to challenge it in court if finalized.

Many signatories of the letter view the administration’s response as retaliatory. “When two parties cannot agree on terms, the normal course is to part ways and work with a competitor,” the letter states. “Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation.”

The dispute has also reignited debate within the AI industry about government use of advanced models. Boaz Barak, an OpenAI researcher, said in a social media post that preventing governments from using AI for mass surveillance is his “personal red line” and “it should be all of ours.” He argued that the industry should treat the risk of government abuse with the same seriousness it applies to bioweapons or cybersecurity threats.

The controversy unfolded just as OpenAI announced it had reached its own agreement to deploy models within the Department of Defense’s classified environments. OpenAI CEO Sam Altman said his company shares the same red lines around mass domestic surveillance and autonomous weapons use.

For many in the industry, the core issue extends beyond Anthropic’s contract. The open letter frames the supply chain risk threat as an extraordinary measure that could reshape how AI firms negotiate with the federal government. If upheld, the designation would effectively blacklist Anthropic from working with the Pentagon and any contractor tied to it — a sweeping consequence stemming from a contract dispute over model safeguards.

Whether the Department of Defense proceeds with the formal designation will determine not only Anthropic’s future government business but also how other AI companies weigh their own red lines when dealing with Washington.

This analysis is based on reporting from TechCrunch.

This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: March 2nd, 2026

Word count: 589 | Last fact-check: March 2nd, 2026
