Trump directs US agencies to toss Anthropic’s AI as Pentagon calls startup a supply risk

US President Donald Trump said on Friday he is directing the government to stop work with Anthropic, and the Pentagon said it would declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown over technology guardrails.

Trump added there would be a six-month phase-out for the Defence Department and other agencies that use the company’s products. If Anthropic does not help with the transition, Trump said, he would use “the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow”.

The actions mark an extraordinary rebuke by the United States against one of the premier companies that have kept it in the lead on national-security-critical AI, threatening to give Anthropic a pariah status that Washington until now had reserved for enemy suppliers. Google and Amazon are among Anthropic’s financial backers.

The moves further set a precedent that US law alone would constrain how AI is deployed on the battlefield, with the Pentagon seeking to preserve all flexibility in defence and not be limited by warnings from the technology’s creators against powering weapons with unreliable AI.

In a statement, Anthropic said it would challenge any risk designation in court by the Department of Defence, which the Trump administration has renamed the Department of War.

“We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government,” the company said.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”

Late on Friday, rival OpenAI, which is backed by Microsoft, Amazon and others, announced its own deal to deploy technology in the Defence Department’s classified network.

In a post on X, CEO Sam Altman said the Pentagon shared OpenAI’s principles of human responsibility over weapons systems and no mass US surveillance.

“We put them into our agreement,” Altman said of the points. “We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted.”

It was not immediately clear whether these contractual details differed from the red lines proposed by Anthropic. The Pentagon and OpenAI did not immediately respond to a request for comment.

‘Nuclear War’

Defence Secretary Pete Hegseth said Anthropic would be designated a supply-chain risk, following an impasse in months of talks on whether the company’s policies could constrain military action.

Meeting with Hegseth this week, Anthropic CEO Dario Amodei argued for weapons and surveillance limits and irked Pentagon officials. The Pentagon said US law, not a private company, would determine how to defend the country.

The designation could bar tens of thousands of contractors from using Anthropic’s AI when working for the Pentagon. That represents an existential threat to its business with the government and could harm its private-sector relationships, said Franklin Turner, an attorney who specialises in government contracts.

“Blacklisting Anthropic is the contractual equivalent of nuclear war,” he said.

Similar US action was taken to remove Chinese tech giant Huawei from the Pentagon’s supply chains. Starting in 2017, the US restricted Defence Department use of Huawei equipment, prohibited federal agencies from purchasing its technology, and halted federal grant and loan funds for Huawei equipment.

Anthropic has raced against fierce competition to sell novel technology to businesses and government, particularly for national security, ahead of a widely expected initial public offering. The company has said it has not finalised an IPO decision.

Saif Khan, who served on the National Security Council in former President Joe Biden’s White House, said the Defence Department’s action “may be the most draconian domestic AI regulation any government has ever issued”.

“The Department is arguably treating Anthropic as a greater national security threat than any Chinese AI companies, none of whom they’ve designated supply-chain risks,” Khan said.

‘Killer Robots’

Tech companies and the Pentagon have repeatedly locked horns since at least 2018, when employees at Google protested against the Pentagon’s use of its AI to analyse drone footage. A rapprochement ensued with companies including Amazon and Microsoft jousting for defence business, while several big-tech CEOs pledged co-operation last year with the Trump administration.

But theoretical “killer robots” have worried human-rights and technology activists as wars in Ukraine and Gaza showcased increasingly automated systems.

Bolder US military action in the past year has added to these concerns, said Jack Shanahan, who directed the Pentagon’s algorithmic warfare effort, Project Maven.

“People might be a little bit more nervous about no restrictions,” Shanahan said. The White House’s legal sign-off could be “top cover for anybody that does anything that would potentially result in lack of due process, civilian casualties, collateral damage.”

The Pentagon signed agreements worth up to $200 million each with major AI labs in the past year, including Anthropic, OpenAI and Google.

Anthropic, negotiating for awards under that ceiling, had flagged concerns about the legal system’s ability to keep up with AI progress. At present, for instance, US laws do not prevent the use of technology to compile seemingly innocuous data to reveal information about people’s private lives, Amodei has said.

Anthropic’s AI has been in use across the intelligence community and armed services, and it was first among peer AI companies to work with classified information, through a supply deal via cloud provider Amazon.
