OpenAI CEO Sam Altman said late Friday that the company has reached an agreement allowing the Department of Defense to deploy its AI models inside classified military networks, a deal that comes amid a high-profile dispute between the Pentagon and rival AI lab Anthropic.
Altman said the agreement includes explicit safeguards barring the use of OpenAI technology for mass domestic surveillance, directing autonomous weapons systems, or high-stakes automated decision-making such as “social credit” systems. He added that those principles are reflected in both the contract and the deployment architecture.
Under the agreement, OpenAI will deploy its models in a cloud-only configuration, retaining full control over its “safety stack.” The company said it will not provide “guardrails off” or non-safety-trained models, and it will not deploy models on edge devices — a setup it argues would reduce the risk of powering fully autonomous lethal systems. Cleared OpenAI engineers and safety researchers will also be embedded with the Department of Defense to help oversee usage.
Altman said OpenAI will build technical safeguards to ensure the models “behave as they should,” and that the Department of War agreed to codify those principles in policy and incorporate them into the contract.
The contract language states that the AI system may be used for “all lawful purposes,” consistent with law and operational requirements, but that it cannot independently direct autonomous weapons in cases where human control is required by law, regulation, or department policy. It also requires that any use of AI for intelligence activities involving private information comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and other relevant directives. The system cannot be used for unconstrained monitoring of U.S. persons’ private information or for domestic law enforcement beyond what is permitted under the Posse Comitatus Act and other applicable law.
The agreement follows a tense standoff between the Pentagon and Anthropic. The Defense Department had pushed AI companies to allow their models to be used for “all lawful purposes.” Anthropic sought to maintain red lines around fully autonomous weapons and mass domestic surveillance, arguing in a statement that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
After Anthropic and the Pentagon failed to reach a deal, President Donald Trump criticized the company on social media and directed federal agencies to phase out its products within six months. Defense Secretary Pete Hegseth said Anthropic was being designated a supply-chain risk and barred contractors that work with the U.S. military from commercial activity with the company. Anthropic said it had not received direct communication from the Department of War or the White House regarding the status of negotiations and said it would challenge any supply-chain designation in court.
Altman said OpenAI had asked the Pentagon to make the same terms available to all AI labs and that the company does not believe Anthropic should be designated a supply-chain risk. He also told employees at an all-hands meeting that the government would allow OpenAI to build its own safety stack and that if a model refused a task, the government would not force the company to override it.
The deal marks a significant expansion of OpenAI’s role in U.S. defense operations. While the company previously hesitated to enter a classified deployment, it said it worked to ensure safeguards were ready before moving forward. “We were—and remain—unwilling to remove key technical safeguards to enhance performance on national security work,” the company said in its announcement.
The agreement positions OpenAI as the first major lab to publicly confirm a classified deployment under a contract that explicitly codifies limits on surveillance and autonomous weapons use. Whether those terms will become a standard framework for other AI companies working with the Pentagon remains an open question, but OpenAI is urging the Department of War to extend similar conditions across the industry.
This analysis is based on reporting from TechCrunch.
This article was generated with AI assistance and reviewed for accuracy and quality.