Why OpenAI's Robotics Chief Just Quit Over the Pentagon Deal
OpenAI’s head of robotics just walked out the door. Everyone’s asking the same questions. Here are the answers.
Q: Who resigned and when?
A: Caitlin Kalinowski, OpenAI’s head of robotics and consumer hardware, announced her resignation on Saturday, March 7, 2026.
Why this matters: Kalinowski wasn’t some junior employee. She came from Meta’s Reality Labs (where she led hardware for the Quest VR headsets) and joined OpenAI in late 2024 specifically to build its robotics division. This was a marquee hire made barely a year and a half before her exit.
Example: Imagine the head of Apple’s car project quitting right before launch. That’s the scale of this departure.
Q: Why did she quit?
A: She cited concerns about OpenAI’s agreement with the Department of Defense. Specifically, she said OpenAI “did not take enough time before agreeing to deploy its AI models on the Pentagon’s classified cloud networks.”
Why this matters: This isn’t about whether AI should work with the military (reasonable people disagree). It’s about process. Kalinowski’s complaint: OpenAI rushed into a major ethics decision without adequate deliberation.
Example: In her X post, she said the deal didn’t do enough to protect Americans from warrantless surveillance and that granting AI “lethal autonomy without human authorization” deserved “more deliberation.”
Q: What exactly is this Pentagon deal?
A: OpenAI reached an agreement to deploy its AI models (like GPT) on the Department of Defense’s classified networks. The deal allows military personnel to use ChatGPT-style tools on secure government systems.
Why this matters: This represents a major policy shift. OpenAI previously had language in its usage policies restricting military applications. Now they’re actively partnering with the Pentagon.
Example: Before: “Don’t use our AI for weapons.” Now: “Here’s our AI running on your classified military networks.”
Q: Is OpenAI the only AI company working with the military?
A: No. The difference is how they’re doing it.
- Anthropic (Claude) got designated a “supply chain risk” by the Pentagon for refusing to play ball the way OpenAI did
- Meta, Google, and others have various military-adjacent research contracts
- Defense contractors are already backing off Claude and switching to OpenAI “out of an abundance of caution”
Why this matters: The Pentagon is actively choosing winners and losers in AI based on compliance with their demands. OpenAI chose to comply. Anthropic chose not to. Now we’re seeing the consequences.
Q: What did Sam Altman say about this?
A: Altman posted on X that he wanted to add language to the Pentagon agreement addressing concerns about mass domestic surveillance. Specifically, he said the updated wording would include protections against surveillance of Americans, though the commitment was still qualified by the phrase “consistent with applicable laws.”
Why this matters: “Consistent with applicable laws” is doing a lot of work in that sentence. If the laws allow warrantless surveillance (and in many cases, they do), then OpenAI’s AI could enable it while technically following Altman’s proposed guardrails.
Example: It’s like saying “We won’t spy on Americans, except where legally permitted.” Cool, so… you’ll spy on Americans.
Q: Why is everyone talking about Anthropic?
A: Because Anthropic is now facing serious business consequences for not taking the Pentagon deal.
- Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk”
- Defense contractors are abandoning Claude preemptively
- Anthropic CEO Dario Amodei sent a scathing memo to employees suggesting the blowup happened because “we haven’t donated to Trump” and “we haven’t given dictator-style praise to Trump”
Why this matters: This isn’t just about ethics anymore. It’s about whether AI companies can afford to have ethics when the government is picking winners.
Example: Anthropic’s usage is “booming” (breaking daily signup records) despite the “supply chain risk” label. Turns out a lot of people want the AI company that told the Pentagon to slow down.
Q: What does “lethal autonomy without human authorization” mean?
A: It means AI systems making kill decisions without a human in the loop.
Why this matters: This is the bright red line in AI ethics debates. Even people who support military AI generally agree that letting automated weapons decide who to kill is a bad idea.
Example: Think drone strikes, but the AI decides the target and pulls the trigger with no human approval. That’s what Kalinowski flagged as needing “more deliberation.”
Q: Did OpenAI respond to Kalinowski’s resignation?
A: Not publicly (as of Saturday evening). The company has been radio silent on her departure.
Why this matters: This is not standard practice for a high-profile executive departure. Usually you get a “we wish them well” statement. The silence suggests this exit was contentious.
Q: Are other OpenAI employees quitting over this?
A: Not publicly (yet). But protests are planned outside OpenAI’s offices, and the grassroots QuitGPT campaign says 1.5+ million people have taken action (sharing on social or signing up for the boycott).
Why this matters: If this becomes a talent retention problem, OpenAI will feel it. Top AI researchers have options. They don’t have to work at the company partnering with the Pentagon.
Q: What happens next?
A: Three scenarios:
- OpenAI doubles down: They proceed with the Pentagon deal, accept the employee/user backlash, and focus on being the “defense-friendly” AI company.
- OpenAI adds guardrails: They implement Altman’s proposed language updates and hope that’s enough to calm critics.
- OpenAI reverses course: Unlikely, but possible if the backlash gets severe enough.
Why this matters: How OpenAI handles this will set a precedent for every other AI lab. Do ethics concerns matter when the Pentagon comes knocking? Or does money (and political favor) override everything?
What Most People Get Wrong
The wrong question: “Should AI companies work with the military?”
The right question: “Should AI companies rush into military deals without transparent deliberation and clear ethical guardrails?”
Kalinowski’s resignation isn’t anti-military. It’s anti-reckless. She’s saying: slow down, think this through, set boundaries. OpenAI chose speed over scrutiny.
What Experts Say
“OpenAI did not take enough time before agreeing to deploy its AI models on the Pentagon’s classified cloud networks.” — Caitlin Kalinowski, former head of robotics at OpenAI
“We haven’t donated to Trump. We haven’t given dictator-style praise to Trump.” — Dario Amodei, Anthropic CEO, on why Anthropic was designated a supply chain risk
“Granting AI lethal autonomy without human authorization [is] a line that deserved more deliberation.” — Caitlin Kalinowski
TL;DR
- Who: Caitlin Kalinowski, OpenAI’s head of robotics
- What: Resigned over OpenAI’s Pentagon deal
- Why: Said the company rushed the decision without adequate ethical deliberation
- Main concern: Warrantless surveillance and lethal autonomous weapons
- OpenAI’s response: (crickets)
- Anthropic’s situation: Got blacklisted for refusing a similar deal
- What it means: AI companies now face a choice—ethics or access. OpenAI chose access.
