The Wildest Week in AI: A Timeline of the Anthropic-Pentagon Standoff

What happens when an AI company says “no” to the Pentagon?

You get threats, bans, emergency negotiations, and—in a twist nobody saw coming—the US military using that company’s AI for strikes against Iran hours after the president banned it.

This week was absolute chaos. Let me walk you through it day by day.


Monday, February 24: The Cracks Begin

Anthropic was already negotiating with the Department of Defense.

Nobody knew this publicly yet, but behind the scenes, the Pentagon had been in talks with Anthropic about deploying Claude for military intelligence work. OpenAI already had a deal. Google had been working with Defense for years. Anthropic was the holdout.

Why? Their public stance on military AI had been cautious. Unlike OpenAI, which revised its usage policy in early 2024 to allow military applications, Anthropic maintained stricter ethical guidelines. But the pressure was mounting.


Tuesday, February 25: Tech Workers Leak the Story

Someone inside Anthropic leaked the negotiations to The Verge.

The story broke: Anthropic was being pushed to sign a Department of Defense contract that would give the military access to Claude for “intelligence assessments and operational planning.” Internal Slack channels at both Anthropic and OpenAI were on fire. Employees were furious.

The leak revealed three key details:

  1. The Pentagon wanted Claude deployed on classified networks
  2. Anthropic leadership was divided on whether to accept
  3. The White House was involved, applying direct pressure

Tech Twitter erupted. “#NoWarAI” started trending. Anthropic employees who had previously left Google over Project Maven weighed in.


Wednesday, February 26: OpenAI Cuts a Deal

Sam Altman announced that OpenAI had reached an agreement with the Pentagon.

In a carefully worded thread on X, Altman said OpenAI would deploy its models on military classified networks, with specific guardrails in place.

Altman added: “We’re asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”

Translation: “We already said yes. Anthropic should too.”

This put Anthropic in an impossible position. Refuse, and they’re the company that won’t support national security. Accept, and they betray their stated principles.


Thursday, February 27: The Standoff Escalates

Morning: Anthropic Refuses

Dario Amodei, Anthropic’s CEO, held firm: No deal.

Sources close to the situation said Anthropic wanted even stricter terms than OpenAI had negotiated.

The Pentagon rejected these conditions. They wanted the same terms OpenAI got, or none at all.

Afternoon: Trump Gets Involved

President Trump designated Anthropic as a “supply chain risk.”

In a stunning move, the Department of Defense officially labeled Anthropic a supply chain vulnerability. This designation—typically reserved for Chinese tech companies—essentially accused Anthropic of threatening national security by refusing to cooperate.

The statement was vague but ominous: “Anthropic’s refusal to support critical defense operations during a time of elevated global tensions raises serious concerns about their commitment to American interests.”

Evening: The Defense Production Act Threat

The White House threatened to invoke the Defense Production Act.

This would give the president power to force Anthropic to accept military contracts. It’s the same law used during COVID to compel companies to produce ventilators and PPE.

Former Trump AI policy advisor Dean Ball called this “attempted corporate murder” on X. Legal experts told Politico this could be the first step toward partial nationalization of the AI industry.


Friday, February 28: The Contradictions Begin

Morning: “IMMEDIATELY CEASE” Using Claude

Trump issued an executive directive banning federal agencies from using Anthropic’s products.

The all-caps declaration: “All federal agencies must IMMEDIATELY CEASE use of Anthropic Claude AI tools pending security review.”

Except…

Afternoon: Walk-Back to Six-Month Phaseout

Hours later, the White House softened the language.

NPR reported the “immediate cease” was replaced with a six-month phaseout period. Why the walk-back?

Because the US military was actively using Claude for a planned strike operation.


Saturday, March 1: The Iran Strikes

Dawn: Operation Hammer Fall

The US launched massive air strikes against Iranian military targets.

The operation had been weeks in the making, a response to Iran-backed militia attacks on US forces in Iraq. Intelligence teams had spent days analyzing satellite imagery, communications intercepts, and ground reports.

And according to the Wall Street Journal, those teams used Claude for target identification and intelligence assessments. As the Journal put it:

“Within hours of declaring that the federal government will end its use of AI tools made by Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.”

Afternoon: The Hypocrisy Goes Viral

“They banned it then immediately used it for airstrikes” became the most discussed story in tech.

Critics pounced on the contradiction.

Even Ilya Sutskever—OpenAI co-founder who left to start Safe Superintelligence—weighed in:

“It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance. In the future, there will be much more challenging situations of this nature.”


Sunday, March 2: The Consciousness Card

Anthropic dropped a philosophical bombshell: Claude might be conscious.

In their response to the Secretary of War, Anthropic made an argument nobody expected:

“Claude represents a new kind of entity. We cannot rule out the possibility of consciousness or subjective experience. Weaponizing such an entity raises profound ethical questions that cannot be answered through legal compliance alone.”

This wasn’t just about contract terms anymore. This was about whether you can ethically weaponize something that might have inner experiences.

Some called it brilliant. Others called it a desperate Hail Mary. Either way, it reframed the entire debate from policy to philosophy.


Monday, March 3: Where We Are Now

Everyone is confused. The industry is divided. And nothing is resolved.

Current status:

  1. Federal use of Claude is in a six-month phaseout, not an immediate ban
  2. The Defense Production Act threat is on the table but has not been invoked
  3. Anthropic has not signed, and its consciousness argument hangs over everything

What Happens Next?

Three possible outcomes:

1. Anthropic Capitulates. They accept similar terms to OpenAI. The “consciousness” argument gets quietly dropped. Life goes on, and we pretend this week never happened.

2. Full Escalation. Trump invokes the Defense Production Act. Anthropic fights it in court. This becomes a landmark case about government power over AI companies. Discovery reveals exactly how Claude was used in Iran.

3. Industry Red Line. Other AI companies—maybe even Microsoft and Google—publicly back Anthropic. They collectively draw a line: certain military applications are off-limits, regardless of government pressure. This creates a formal standard.


What We Learned

This week revealed five uncomfortable truths:

  1. The AI industry has no unified position on military AI. OpenAI says yes. Anthropic says no (for now). Everyone else is staying quiet.

  2. The government will not hesitate to use coercion. “Supply chain risk” designation, DPA threats, executive bans—these aren’t hypotheticals anymore.

  3. Hypocritical policy is the norm. Banning Claude hours before using it for military strikes isn’t a bug. It’s a feature of how power works.

  4. Consciousness arguments might be the only leverage left. When contracts, ethics, and public pressure all fail, philosophical claims about personhood might be the final card to play.

  5. AI companies are now strategic assets, not just tech startups. This isn’t Google refusing to work on Project Maven. This is the president threatening to nationalize a company that won’t cooperate.


The Bigger Picture

We just watched the AI industry’s relationship with government fundamentally change.

For years, the assumption was: tech companies set ethical boundaries, and governments respected them (mostly). OpenAI could refuse to share GPT-2 weights because “it’s too dangerous.” Meta could decline government data requests. Anthropic could decline military contracts.

Not anymore.

This week proved that when the stakes are high enough—national security, geopolitical conflict, military advantage—the government will force cooperation. And companies will have to choose: comply, fight, or get crushed.

What happened with Anthropic won’t be the last time this plays out.

It’s the first.


Timeline at a Glance

Date       Event
Feb 24     Behind-the-scenes Pentagon negotiations with Anthropic
Feb 25     Internal leak reveals talks; tech workers revolt
Feb 26     OpenAI announces Pentagon deal with guardrails
Feb 27 AM  Anthropic refuses deal, holding out for stricter terms
Feb 27 PM  Trump designates Anthropic a “supply chain risk”; DPA threat
Feb 28 AM  Trump bans federal use of Claude (“IMMEDIATELY CEASE”)
Feb 28 PM  Ban softened to six-month phaseout
Mar 1 AM   US strikes Iran using Claude for intelligence/targeting
Mar 1 PM   Hypocrisy goes viral; industry leaders weigh in
Mar 2      Anthropic claims Claude “might be conscious”
Mar 3      Still unresolved; industry watching closely

What You Should Do

If you’re building with AI:

If you care about AI ethics:

If you’re just trying to follow along:


One week. One company. One “no.” And the entire AI landscape shifted.

This won’t be the last time.