On Thursday, Anthropic CEO Dario Amodei said the company “cannot in good conscience” agree to let the Department of Defense use its AI models for all lawful purposes without limitation, setting up a high-stakes standoff with the Pentagon over how its technology can be deployed. The dispute centers on Anthropic’s request for assurances that its models will not be used for fully autonomous weapons or mass domestic surveillance, conditions the DOD has so far declined to accept.
The comments come after weeks of tense negotiations between the AI startup and the Pentagon. Defense Secretary Pete Hegseth met with Amodei at the Pentagon on Tuesday and gave the company until Friday evening to agree to the department’s terms. According to a senior Pentagon official, the DOD sent Anthropic its “last and final offer” on Wednesday night.
Hegseth has threatened to designate Anthropic a “supply chain risk” or invoke the Defense Production Act to compel compliance if it refuses. Chief Pentagon spokesman Sean Parnell said Thursday that the department has “no interest” in using Anthropic’s models for fully autonomous weapons or to conduct mass surveillance of Americans, noting that such surveillance would be illegal. He emphasized that the military simply wants to use the company’s models for “all lawful purposes” and warned that it “will not let ANY company dictate the terms regarding how we make operational decisions.”
Anthropic, for its part, argues that two specific guardrails are necessary: no use of its models for fully autonomous weapons, and no use for mass domestic surveillance. Amodei said the company hopes to continue serving the department and U.S. armed forces, but only with those safeguards in place. “It is the Department’s prerogative to select contractors most aligned with their vision,” he wrote. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
The clash is particularly notable given Anthropic’s existing relationship with the military. The company signed a $200 million contract with the DOD in July and became the first AI lab to integrate its models into mission workflows on classified networks. Its rivals — OpenAI, Google and xAI — also received contract awards of up to $200 million last year and have agreed to allow the department to use their models for all lawful purposes on unclassified systems. This week, xAI also agreed to allow its models to be used in classified settings.
Anthropic’s position signals a willingness to risk that defense business rather than drop its restrictions. Amodei said that if the department chooses to offboard the company, Anthropic would work to ensure a smooth transition to another provider to avoid disrupting military operations or planning.
At issue is not whether the military can use advanced AI tools — it already does — but under what conditions. The Pentagon says it needs flexibility to deploy models across lawful use cases without carve-outs. Anthropic wants explicit limits on two categories it views as especially sensitive: fully autonomous weapons and large-scale domestic surveillance.
For now, both sides are holding their ground. The DOD maintains that its request is “common-sense” and necessary to avoid jeopardizing operations. Anthropic says it cannot agree to blanket authorization without compromising its principles. With a Friday deadline looming, the outcome will determine whether the startup remains a key AI provider to the U.S. military — or becomes the first major lab to walk away from a significant defense partnership over deployment boundaries.
This analysis is based on reporting from CNBC.