This legal battle began after weeks of intense back-and-forth negotiations. In late February, news broke that Defense Secretary Pete Hegseth was pressuring Anthropic to remove specific safeguards from its AI systems. However, Dario Amodei, Anthropic's CEO, made it clear the company would not permit its models to be used for mass surveillance or the development of autonomous weapons. The company held firm on these ethical "red lines" even as a February 27 deadline came and went.
In response to Anthropic's refusal, Hegseth threatened to designate the company a "supply chain risk" and stated the U.S. government would cancel its existing $200 million contract, per Gizmodo. President Donald Trump then ordered all federal agencies to cease using Anthropic's tools, calling the company's leadership "left wing nut jobs," according to BBC. The "supply chain risk" designation effectively blacklists Anthropic from obtaining U.S. government contracts and prohibits other defense contractors from using its tools.
You might be thinking: why fight so hard against a powerful client like the Pentagon? For Anthropic, the fight centers on foundational principles. Anthropic's lawsuit asserts that the Constitution "does not allow the government to wield its enormous power to punish a company for its protected speech" and that "no federal statute authorizes the actions taken here." This isn't just a contract dispute; it's a direct challenge to the government's authority to dictate ethical boundaries for private AI development.
Meanwhile, competitor OpenAI stepped into the void, quickly finalizing a deal with the Department of Defense. Interestingly, OpenAI CEO Sam Altman publicly emphasized similar safety principles regarding prohibitions on domestic mass surveillance and human responsibility for autonomous weapons. OpenAI’s contract specifically states, "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." This apparent alignment, however, didn't prevent internal dissent.
Caitlin Kalinowski, OpenAI's head of robotics hardware, resigned shortly after the DoD deal, stating on X (formerly Twitter) that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Her departure highlights the deep ethical divisions within the AI community, even among companies ostensibly committed to similar safety principles. Anthropic's legal brief also drew support from multiple employees at OpenAI and Google, who view Anthropic's red lines as critical technical constraints in the absence of a comprehensive legal framework for AI, reports GIGAZINE.