President Donald Trump has ordered all US government agencies to stop using Claude and other Anthropic services, escalating an already volatile feud between the Department of Defense and the company over AI safeguards. Taking to Truth Social on Friday afternoon, the president said there would be a six-month phase-out period for federal agencies, including the Defense Department, to migrate off of Anthropic's products.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," the president wrote. "Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal penalties to follow."
Earlier today, US Defense Secretary Pete Hegseth had threatened to label Anthropic a "supply chain risk" if it didn't agree to withdraw safeguards that insist Claude not be used for mass surveillance against Americans or in fully autonomous weapons. In a post on X published after President Trump's statement, Hegseth said he was "directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Anthropic did not immediately respond to Engadget's request for comment. Earlier in the day, a spokesperson for the company said the contract Anthropic received after CEO Dario Amodei outlined Anthropic's position made "virtually no progress" on preventing the outlined misuses.
"New language framed as a compromise was paired with legalese that would allow these safeguards to be disregarded at will. Despite DoW's recent public statements, these narrow safeguards have been the crux of our negotiations for months," the spokesperson said. "We remain ready to continue talks and committed to operational continuity for the Department and America's warfighters."
Advocacy groups like the Center for Democracy and Technology (CDT) quickly came out against the president's threats. "This action sets a dangerous precedent. It chills private companies' ability to engage frankly with the government about acceptable uses of their technology, which is especially important in national security settings that so often have reduced public visibility," said CDT President and CEO Alexandra Givens, in a statement shared with Engadget. "These threats undermine the integrity of the innovation ecosystem, distort market incentives and normalize an expansive view of executive power that should worry Americans all across the political spectrum."
For now, it appears the AI industry is united behind Anthropic. On Friday, hundreds of Google and OpenAI employees signed an open letter urging their companies to stand in "solidarity" with the lab. According to an internal memo seen by Axios, OpenAI CEO Sam Altman said the ChatGPT maker would draw the same red line as Anthropic.
In a blog post published late on Friday, Anthropic vowed to "challenge any supply chain risk designation in court," and assured its customers that only work related to the Defense Department would be affected. The company's full statement is available here; an excerpt is below:
Designating Anthropic as a supply chain risk would be an unprecedented action, one historically reserved for US adversaries and never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.
We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the federal government.
No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.
Update, February 27, 9PM ET: This story was updated twice after publishing. First at 6PM ET to include a link to and quotes from Hegseth regarding the designation of Anthropic as a supply chain risk. Later, a quote from Anthropic was added, along with a link to the company's blog post on the subject.

