Anthropic, the San Francisco-based AI firm, on Thursday rejected the Pentagon’s ultimatum over military use of its flagship product, Claude AI. The US Defense Department demanded unrestricted access to Claude’s capabilities, but Anthropic refused, despite threats of blacklisting.
“These threats don’t change our position: we cannot in good conscience accede to their request,” Anthropic CEO Dario Amodei said in a statement.
The confrontation effectively jeopardizes Anthropic’s long-standing defense contracts with the federal government.
The dispute between the Pentagon and Anthropic stems from the AI startup’s refusal to remove certain guardrails that would allow the US military to use autonomous targeted weapons and conduct mass surveillance in the US.
“To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces so far,” Amodei argued in the statement.
Anthropic’s statement comes just a day before the Pentagon’s deadline, set by Defense Secretary Pete Hegseth, with whom Amodei met earlier this week.
The Defense Department had given Anthropic an ultimatum: agree to unconditional military use of its technology, even when it violates the company’s ethical standards, or face being forced to comply under emergency federal powers.
Why is Anthropic refusing to give in to the Pentagon’s demands?
Anthropic, backed by Amazon and Google, has a contract worth up to $200 million with the US Department of Defense. However, Amodei said on Thursday that his company will draw an ethical line regarding its use for mass surveillance of US citizens and fully autonomous weapons, even if it means the contract is lost.
The department has said it will contract only with AI companies that accede to “any lawful use” and remove safeguards, Amodei said in his statement. “Using these systems for mass domestic surveillance is incompatible with democratic values.”
He said leading AI systems are not yet reliable enough to be trusted with the power to launch lethal weapons without any human oversight.
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said.
Anthropic vs Pentagon
After meeting with Anthropic earlier this week, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01pm (22:01 GMT) on Friday or face being forced to comply under the Defense Production Act.
Earlier on Thursday, Pentagon spokesperson Sean Parnell said on X that the department has no interest in using AI to conduct mass surveillance of Americans, nor does it want to use AI to develop autonomous weapons that operate without human involvement.
“Here is what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said.
The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company’s ability to work with the US government, as well as its reputation.
However, Anthropic has refused to move from its position. “It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei said in his statement.