
WASHINGTON (AP) — The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties, culminating an unusually public clash between the government and the company over AI safeguards.

Defense Secretary Pete Hegseth said he was designating Anthropic as a supply chain risk, a move that could prevent U.S. military vendors from working with the company.

Hegseth’s remarks, delivered in a social media post, came shortly after the Pentagon’s deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences — and nearly 24 hours after CEO Dario Amodei said his company “cannot in good conscience accede” to the Defense Department’s demands.

Calling the company “Leftwing nut jobs,” President Donald Trump said Anthropic made a mistake trying to strong-arm the Pentagon. Trump wrote on Truth Social that most agencies must immediately stop using Anthropic's AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.

At issue in the defense contract was a clash over AI’s role in national security. Anthropic had said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”

Anthropic, maker of the chatbot Claude, could afford to lose the contract. But an ultimatum this week from Hegseth posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups. Military officials had warned Anthropic earlier in the week they could deem it “a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.

Trump also said Anthropic could face “major civil and criminal consequences” if it’s not helpful in the phase-out period. Anthropic didn’t immediately reply to a request for comment on the new developments.

The president’s decision was preceded by hours of social media posts in which top Trump appointees from the Pentagon and the State Department criticized Anthropic and slammed the company’s reluctance to acquiesce to the administration’s demands.

Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said the penalties on Anthropic “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”

The dispute stunned AI developers in Silicon Valley, where a growing number of workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand in open letters and other forums.

The move is likely to benefit Elon Musk’s competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to two other competitors, Google and OpenAI, that also have contracts to supply their AI tools to the military.

Musk sided with Trump’s administration on Friday, saying on his social media platform X that “Anthropic hates Western Civilization.”

But one of Amodei’s fiercest rivals, OpenAI CEO Sam Altman, sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview, suggesting that OpenAI and most of the AI field share the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.

“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC.

Retired Air Force Gen. Jack Shanahan wrote on social media that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”

Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

“They’re not trying to play cute here,” he wrote Thursday on LinkedIn.


O'Brien reported from Providence, Rhode Island.

Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.