Where We Stand: Anthropic, the Military, and Two Lines We Won't Cross

We’ve built AI for the U.S. military, and we’re proud of it. But the Pentagon is now asking us to remove safeguards we believe protect American lives and democracy. Here’s why we said no.

Published February 26, 2026

First, some context: we’re deeply committed to national defense

Anthropic believes that AI is one of the most important technologies in the world right now, and that the United States and its democratic allies need to lead in developing it safely. That’s not just talk. We’ve put it into action.

We were the first frontier AI company to deploy our AI on classified U.S. government networks. The first to bring it to the National Laboratories. The first to build custom AI tools for national security customers. Today, our AI model, Claude, is used across the Department of Defense for intelligence analysis, military planning, cybersecurity, and more.

We’ve also made real financial sacrifices for national security. We turned down hundreds of millions of dollars in revenue by cutting off access to Claude for companies connected to the Chinese Communist Party. We’ve fought back against CCP-sponsored cyberattacks on our systems. And we’ve publicly supported export controls on advanced computer chips to help keep America ahead.

We are not anti-military. We are not trying to run the Pentagon. Military decisions belong to the military — not us.

So what’s the dispute?

The Department of Defense is now saying it will only work with AI companies that agree to allow any lawful use of their AI, which means removing two specific safeguards we have in place.

Those two safeguards cover situations where we believe AI, right now, does more harm than good, even for national security. The Pentagon wants them gone. We’ve refused. And they’ve threatened serious consequences.

The two things we won’t do

1. Enable mass surveillance of American citizens
2. Deploy fully autonomous weapons that remove humans from life-or-death decisions

Why we won’t allow mass domestic surveillance

We fully support using AI for lawful intelligence work — tracking foreign threats, countering espionage, and protecting national security. That’s legitimate and important.

But “mass domestic surveillance” is something different. It means using AI to automatically monitor the movements, web browsing, and personal associations of ordinary Americans, at a massive scale, without warrants, and without most people knowing it’s happening.

Here’s something that might surprise you: this is currently legal in the United States. The government can buy detailed records about Americans from commercial data brokers without a warrant, because the law hasn’t kept up with modern technology. Even the intelligence community has admitted this raises serious privacy concerns, and there’s bipartisan pushback in Congress.

Powerful AI makes this vastly more dangerous. Scattered pieces of data that seem harmless on their own (a location ping here, a website visit there) can now be stitched together by AI into a detailed portrait of any person’s life, automatically and at a scale that was never before possible.

That’s a threat to the very democratic values we’re supposedly trying to defend. We won’t build that system.

Why we won’t build fully autonomous weapons yet

This one is more nuanced. We’re not opposed to autonomous weapons in principle. “Partially autonomous” weapons systems that assist human soldiers but keep a person in the decision loop are already being used effectively in conflicts like the war in Ukraine. We support that.

“Fully autonomous” weapons are different. These are systems that would select a target and pull the trigger — or drop the bomb — entirely on their own, with no human making that final call.

We believe this may eventually be necessary for national defense. But today’s AI, including ours, simply isn’t reliable enough for that responsibility. The errors AI makes can be catastrophic when the stakes involve human lives. Our professional military applies judgment, ethics, and accountability that AI systems today cannot replicate.

“We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”

We’ve offered to work directly with the Department of Defense on research to improve AI reliability for these applications. They declined. But our position stands: fully autonomous lethal weapons need better AI than exists today, and proper oversight guardrails that don’t yet exist. We won’t deploy something we believe is unsafe.

What the Pentagon threatened and why it doesn’t change our answer

The Department of Defense hasn’t just asked us to reconsider. They’ve made specific threats:

They said they will remove Anthropic from their systems if we keep these safeguards. They’ve also threatened to label us a “supply chain risk” — a designation that has only ever been used for foreign adversaries, never for an American company. And they’ve threatened to invoke the Defense Production Act to force us to comply.

We noted the contradiction ourselves: you can’t simultaneously claim that we’re a national security risk and that our AI is essential to national security. Those claims can’t both be true.

But regardless of the threats, our answer is the same. We can’t in good conscience agree to their request.

What happens now

It’s the Pentagon’s right to choose which companies they work with. We respect that. Our strong preference is to keep working with the Department of Defense and the men and women who serve, just with these two safeguards in place.

If they decide to remove us, we will do everything we can to make the transition smooth. We won’t leave warfighters without support. Our models will remain available under the generous terms we’ve proposed for as long as needed.

We believe deeply in American security. We believe in democracy. And we believe that sometimes defending those values means saying no — even to the people asking in their name.

We remain ready to serve. This post is adapted from Anthropic’s official statement to the Department of Defense, dated February 26, 2026.
