Sam Altman Strikes Pentagon AI Deal With Safety Guardrails as Trump Targets Anthropic in Feud
OpenAI deploys AI in US military networks with safety conditions barring surveillance and autonomous weapons.
OpenAI CEO Sam Altman announced that the company has reached a deal with the US Department of War (DoW) to deploy its AI models within the department’s classified network. A key element of the agreement is a prohibition on domestic mass surveillance, one of OpenAI’s core safety principles, along with a requirement that humans remain responsible for any use of force, including decisions involving autonomous weapon systems. Altman emphasised that the DoW has incorporated these principles into its policies, reflecting a commitment to safety and responsible deployment. Technical safeguards will also be implemented, including Full Disk Encryption (FDE) and deployment exclusively on cloud networks, to ensure model behaviour aligns with safety standards. OpenAI is urging that the same terms be applied across the AI industry to promote standardised safety and ethical deployment practices.
The announcement comes amid tensions between the Trump administration and Anthropic, after the AI startup refused to grant unconditional military use of its Claude models. President Donald Trump directed all federal agencies to immediately stop using Anthropic’s technology, criticising the company as a “woke” firm and threatening legal and criminal consequences if it does not comply. Anthropic has insisted that its AI should not be used for mass surveillance or fully autonomous weapons and plans to challenge the government’s actions legally. Altman, while not directly naming Anthropic, stressed that OpenAI prefers de-escalation through reasonable agreements rather than legal confrontations.
Under the OpenAI-DoW deal, the AI models will operate under controlled deployment with strict technical safeguards, ensuring both policy and operational compliance. Altman stated that the DoW demonstrated respect for AI safety during negotiations and that additional measures will ensure models behave as intended. These safeguards, combined with legal and policy alignment, are designed to prevent misuse and protect citizens’ privacy while enabling classified applications. OpenAI aims to serve humanity responsibly, maintaining transparency and safety as guiding principles in military collaborations.
The agreement highlights OpenAI’s insistence on embedding its ethical standards into AI deployment contracts, with clear prohibitions on domestic surveillance and autonomous weapon decision-making. Altman reaffirmed the company’s mission of responsible AI use, emphasising the importance of safety and broad distribution of AI benefits. By formalising these rules with the DoW, OpenAI seeks to balance national security applications with public safety and ethical considerations, setting a precedent for future AI-government collaborations.
Amid the ongoing Anthropic dispute, the Trump administration’s actions illustrate the high stakes of AI deployment in sensitive government operations. OpenAI’s approach contrasts with Anthropic’s stance, favouring negotiated terms and safeguards over outright rejection or confrontation. By promoting industry-wide standardisation of safety measures, Altman is signalling a broader effort to ensure responsible AI use across the defence sector. The deal underscores the intersection of AI innovation, government oversight, and ethical responsibility in a rapidly evolving technology landscape.