Pentagon summons Anthropic chief in dispute over AI limits

Wednesday, February 25, 2026 - In a rapidly escalating standoff that could reshape how the U.S. military leverages artificial intelligence, U.S. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for what officials described as a high-stakes, confrontational meeting over restrictions on the use of Anthropic's Claude AI in military operations.

The talks, held on Tuesday, February 24, 2026, come amid mounting pressure from the Pentagon for AI companies to loosen usage limits and make their most advanced systems available for all lawful defense purposes — even as Anthropic pushes back on ethical grounds.

At the heart of the dispute is Claude’s ethical safeguard policy, which currently prohibits deployment for fully autonomous weapons systems and mass surveillance of U.S. citizens — rules that Anthropic says are integral to responsible AI development. The Pentagon, however, views these guardrails as obstacles to operational flexibility, demanding unfettered access to Claude under its contracts. 

Officials have even threatened to designate Anthropic a “supply chain risk,” a label typically reserved for adversarial entities, or invoke the Defense Production Act to compel compliance if the company doesn’t bend to military requirements by a Friday deadline.

This confrontation unfolds against a backdrop of fierce competition among AI developers for Pentagon trust and contracts, with some rivals already agreeing to broader terms. Claude's status as the only major AI model deployed on classified defense networks makes it strategically valuable to the U.S. military, but that position now hangs in the balance.

As the deadline looms, the outcome could disrupt not only U.S. defense partnerships but also set a defining precedent for how commercial AI firms balance ethical restrictions with national security demands.
