How talks between Anthropic and the Defense Dept. fell apart

Wednesday, March 4, 2026 – Negotiations between AI firm Anthropic and the U.S. Defense Department broke down over a disagreement about how the company’s Claude artificial intelligence model could be used in military contexts.

The Pentagon pushed for a contract that would allow it to use Claude for all lawful purposes — including scenarios that could involve autonomous weapons or extensive data analysis — without restrictions. Anthropic, led by CEO Dario Amodei, refused to agree to those terms, citing ethical concerns and insisting that Claude not be deployed for mass domestic surveillance or in fully autonomous lethal systems.

The standoff escalated when the Pentagon set a Friday evening deadline, warning that it could label Anthropic a “supply chain risk” and remove the company from government contracts if it did not agree. Anthropic publicly stated it could “not in good conscience accede” to the Defense Department’s demands, saying the new contract language offered “virtually no progress” on safeguarding its AI’s ethical guardrails.

After the breakdown, the Trump administration ordered federal agencies to stop using Anthropic’s technology, and several departments began shifting to AI models from competitors instead.

OpenAI, meanwhile, secured its own agreement with the Defense Department under terms allowing broad use of its models while pledging safeguards against prohibited applications. The dispute underscores the growing tension between AI safety commitments and the military’s demand for expansive operational access to advanced artificial intelligence tools.
