OpenAI is revising its newly announced agreement to provide artificial intelligence systems to the US Department of War (DoW) after CEO Sam Altman acknowledged the deal was rolled out too quickly and appeared “opportunistic and sloppy.”
In a message to employees reposted on X on Monday night, Altman said the company should not have rushed to announce the contract on Friday, shortly after the Pentagon dropped its previous AI contractor, Anthropic. He said OpenAI will now explicitly prohibit its technology from being used for domestic mass surveillance or by defense intelligence agencies such as the National Security Agency (NSA).
The original agreement triggered backlash almost immediately. Critics raised concerns that OpenAI’s models could be used in surveillance programs, drawing comparisons to the 2013 Snowden revelations about the NSA’s mass data collection. Despite OpenAI’s initial assurances that the deal included “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s,” the company faced skepticism online and inside the tech industry.
On social platforms including X and Reddit, users called for a “delete ChatGPT” campaign. According to Sensor Tower data cited in reports, Claude — the chatbot made by Anthropic — climbed to the top of Apple’s App Store rankings, overtaking ChatGPT.
Altman said the company had been trying to “de-escalate things and avoid a much worse outcome,” but admitted the speed of the announcement created confusion. “The issues are super complex, and demand clear communication,” he wrote.
The controversy unfolded against a tense backdrop. Anthropic had previously resisted Pentagon demands to allow unrestricted use of its AI systems, arguing that mass domestic surveillance was “incompatible with democratic values.” President Donald Trump responded by calling Anthropic “leftwing nut jobs” and directing federal agencies to phase out the company’s technology. The Department of War designated Anthropic a supply chain risk, prompting additional cabinet-level agencies, including the departments of State, Treasury, and Health and Human Services, to cease using its products.
OpenAI’s agreement with the DoW came soon after Anthropic was dropped. The company said in a blog post that its “red lines” include barring use of its technology to direct autonomous weapons systems. Still, nearly 900 employees at OpenAI and Google signed an open letter urging their employers to refuse any use of AI models for domestic surveillance or autonomous killing without human oversight. The signatories comprised 796 Google employees and 98 OpenAI staff.
Observers have questioned how OpenAI secured a deal that Anthropic deemed unacceptable. Miles Brundage, OpenAI’s former head of policy research, wrote on X that employees’ “default assumption” should be that OpenAI “caved + framed it as not caving,” though he acknowledged the company is complex and that some involved believed they achieved a fair outcome. He added separately that he would “rather go to jail” than comply with an unconstitutional order.
Altman’s decision to amend the contract marks a notable public recalibration for a company that has positioned itself as balancing commercial growth with safety guardrails. With more than 900 million ChatGPT users globally, OpenAI’s defense partnerships carry both financial weight and reputational risk.
The episode underscores the growing tension between AI companies pursuing government contracts and internal and public pressure over how those systems are deployed — particularly when questions of surveillance and military use are involved.
This analysis is based on reporting from the Guardian.
Image courtesy of OpenAI.
This article was generated with AI assistance and reviewed for accuracy and quality.