Defense Secretary Pete Hegseth has reportedly given Anthropic chief executive Dario Amodei until Friday (27 February) night to grant the US military unfettered access to its flagship AI model, Claude, or face severe consequences, according to a report by Axios.
The ultimatum, delivered during a tense meeting on Tuesday (24 February), as Axios reported citing officials in the know, underscores a widening rift over the boundaries of AI safeguards in national security operations. At stake is the Pentagon's continued access to Claude, currently the only AI model embedded in its most sensitive classified systems.
Hegseth set the Friday deadline, creating a critical inflection point in the relationship between Silicon Valley and the US national security state. The Pentagon appears determined to assert operational supremacy in AI deployment, while Anthropic continues to defend guardrails designed to limit certain forms of military use.
Pentagon pressures Anthropic over AI safeguards
According to the Axios report, Hegseth warned that the Department of Defense could either sever ties with Anthropic and formally designate the company a "supply chain risk," or invoke the Defense Production Act (DPA) to compel the firm to tailor its model to military requirements.
"The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they're that good," a Defense official told Axios ahead of the meeting.
The warning marks one of the most direct confrontations to date between Washington and a private AI developer over the permissible scope of military AI use. While Anthropic has signalled a willingness to adjust its usage policies for defence applications, it has refused to allow Claude to be deployed for the mass surveillance of Americans or for weapons systems operating without human involvement.
Classified systems and operational dependency on Claude AI
Claude's integration into classified Pentagon systems has created a strategic dependency that complicates any threat to terminate the relationship. The model is reportedly used in both highly sensitive operational contexts and a range of routine bureaucratic military functions.
One source familiar with the discussions indicated that, at present, Claude appears to lead competing models in several applications relevant to military planning, including offensive cyber capabilities.
The Pentagon is said to be accelerating discussions with OpenAI and Google to transition their models, already used in unclassified settings, into classified environments. Gemini has emerged as a potential alternative, though such an arrangement would require Google to allow the Pentagon to use its system for "all lawful purposes," terms that Anthropic has declined.
Elon Musk's xAI has recently secured a contract to introduce Grok into classified settings, but it remains unclear whether that system could fully replace Claude's current capabilities.
The Defense Production Act: An unusual adversarial tool
The Defense Production Act grants the president authority to compel private companies to accept and prioritise contracts deemed necessary for national defence. It was notably used during the Covid-19 pandemic to expand production of vaccines and ventilators.
However, deploying the law in a coercive manner against a technology company over AI safeguards would represent an unusual and adversarial application. A senior Defense official suggested the objective would be to compel Anthropic to adapt its model to Pentagon requirements without additional guardrails.
Anthropic could challenge such action in court, arguing that its product constitutes specialised, custom-built software for sensitive government use rather than a commercially available good subject to DPA compulsion, according to one defence adviser cited in the report.
Dispute over Venezuela operation deepens friction
Tensions have been further inflamed by a dispute surrounding Claude's alleged use during a Venezuela operation conducted through Anthropic's partnership with Palantir.
Hegseth reportedly referenced the Pentagon's claim that Anthropic raised concerns to Palantir about the model's deployment during the Maduro raid. Amodei denied the allegation, saying Anthropic neither raised such concerns nor broached the subject with Palantir beyond standard operating conversations.
He reiterated that the company's red lines have never prevented the Pentagon from doing its work or posed a problem for anyone operating in the field.
Sources differed in their characterisation of Tuesday's meeting. A senior Defense official described it as "not warm and fuzzy at all." Another source said it remained "cordial," with no raised voices, and that Hegseth praised Claude directly to Amodei.
Hegseth made clear that he would not allow any private company to dictate the terms under which the Pentagon makes operational decisions or to object to individual use cases.
Supply chain risk designation looms
Should the Pentagon sever its contract and label Anthropic a supply chain risk, the repercussions would extend beyond the company itself. Other defence contractors would likely be required to certify that Claude is not embedded within their workflows, a complex undertaking given the model's current integration across systems.
Anthropic maintained a conciliatory tone after the meeting.
"During the conversation, Dario expressed appreciation for the Department's work and thanked the Secretary for his service," an Anthropic spokesperson said.
"We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."