President Trump has ordered all federal agencies to stop using AI technology from Anthropic, escalating a dispute over AI safeguards and military use. The decision, announced via Truth Social, underscores the growing tension between tech companies setting ethical AI boundaries and government demands for unrestricted access, particularly within the defense sector.
The Standoff Over AI Usage
The conflict between the Trump administration and Anthropic stems from the Defense Department's demand for unrestricted access to Anthropic's Claude AI model. San Francisco-based Anthropic refused, citing concerns over potential misuse, specifically for mass domestic surveillance and autonomous weapons systems. This refusal triggered a harsh response from the President and the Defense Department.
Trump's Directive and Rationale
President Trump publicly denounced Anthropic as "woke" and "leftwing," claiming the company's stance endangers troops and jeopardizes national security. He stated that all federal agencies must immediately stop using Anthropic's technology. In a Truth Social post, he added that a six-month phase-out period would be allowed for agencies, such as the Department of Defense, that are heavily reliant on Anthropic's products [3].
Pentagon's Response
Defense Secretary Pete Hegseth took a firm stance, threatening to revoke Anthropic's $200 million contract with the U.S. military. He also raised the possibility of designating Anthropic as a "supply-chain risk." This designation would prevent companies that do business with the Pentagon from using Anthropic's technology, placing it in a category typically reserved for foreign adversaries like China and Russia [5]. Hegseth also suggested invoking the Defense Production Act, which allows the government to prioritize national defense needs during emergencies, to compel Anthropic to provide an unrestricted version of Claude.
Anthropic's Position
Despite the pressure, Anthropic CEO Dario Amodei stood firm. "These threats do not change our position: we cannot in good conscience accede to their request," Amodei stated in a letter. Anthropic's refusal underscores a growing concern among AI developers about the ethical implications of their technology, particularly regarding surveillance and autonomous weapons.