OpenAI has reached an agreement with the Department of Defense (DoD) to integrate its AI models into the agency's classified networks, subject to specific safety principles (mlq.ai)(cnbc.com). The deal surfaced shortly after the DoD severed ties with Anthropic, citing supply-chain risks (businessinsider.com)(bloomberg.com). The agreement prohibits domestic mass surveillance and emphasizes human responsibility in the use of force, including autonomous weapons (mlq.ai)(bloomberg.com). OpenAI will also implement technical safeguards to ensure its models function as intended.
The Department of Defense's decision to collaborate with OpenAI follows disagreements with Anthropic over restrictions on AI usage. The DoD reportedly sought unrestricted access to AI models for lawful purposes but met resistance from Anthropic, specifically over the use of its Claude model for mass surveillance or autonomous weapons (mlq.ai)(thenextweb.com). Defense Secretary Pete Hegseth accused Anthropic of attempting to control operational military decisions (thenextweb.com). In contrast, OpenAI has seemingly agreed to terms that align with the DoD's requirements while incorporating ethical safeguards.
OpenAI CEO Sam Altman stated that the agreement with the Department of Defense includes the company's principles of prohibiting domestic mass surveillance and ensuring human responsibility for the use of force (bloomberg.com). He noted that OpenAI is asking the government to offer the same terms to all AI companies it collaborates with. According to Jeremy Lewin, Senior Official Under Secretary for Foreign Assistance, Humanitarian Affairs, and Religious Freedom, each contract with OpenAI and xAI rests on "certain existing legal authorities and includes certain mutually agreed upon safety mechanisms."
OpenAI is implementing several safeguards to ensure the ethical and safe deployment of its AI models within the Department of Defense, including technical measures to ensure the models behave as intended (bloomberg.com). OpenAI engineers will work with the agency on model safety, and the models will be deployed on cloud networks (bloomberg.com). OpenAI's models are not yet available on Amazon's cloud, which the government uses, but a recent partnership with Amazon Web Services (AWS) could change that (bloomberg.com).
These safeguards reflect OpenAI's stated commitment to responsible AI development and deployment. The agreement explicitly prohibits domestic mass surveillance and mandates human responsibility for the use of force, including autonomous weapon systems (mlq.ai)(bloomberg.com). These terms are intended to keep AI use within ethical bounds in the defense sector.
Anthropic, which began working with the U.S. government in 2024, has strongly resisted pressure to remove guardrails from its AI models (thenextweb.com). The company reiterated its stance, stating that "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons" (thenextweb.com). Anthropic also stated it would challenge any supply chain risk designation in court (businessinsider.com).
The dispute reportedly stemmed from a contract worth up to $200 million, where Anthropic refused to drop restrictions on its Claude model (mlq.ai). Anthropic's firm stance highlights the ongoing debate about the ethical boundaries of AI development and deployment, particularly in sensitive sectors like defense.
The Department of Defense's deals with OpenAI and xAI, along with its conflict with Anthropic, set a precedent for future collaborations between the government and AI companies. Altman stated that OpenAI is requesting that the government offer the same terms to all AI companies it works with (cnbc.com). This suggests a move towards standardized ethical guidelines and safety mechanisms in government AI contracts.
Other AI companies, including Google and xAI, have also secured agreements or approvals for the use of their AI models by the Department of Defense, including in classified environments (thenextweb.com). The DoD is clearly seeking to leverage the capabilities of AI across various applications while also addressing potential risks.