Claude Sonnet 4.6: The Model That Makes Paying for Opus Hard to Justify

Anthropic just quietly rewired what “default” means for millions of Claude users — and the implications are bigger than the launch post made them sound.

On February 17th, 2026, Anthropic replaced Claude Sonnet 4.5 with Sonnet 4.6 as the default model for every Free and Pro Claude user. That’s not a minor point update. Sonnet 4.6 is a full-capability upgrade across coding, computer use, long-context reasoning, agent planning, and visual design. It also brings a 1-million-token context window in beta — enough to hold entire codebases, stacks of research papers, or a year’s worth of customer contracts in a single request.

The pricing didn’t change. $3 per million input tokens, $15 per million output tokens — same as Sonnet 4.5. What changed is what you get for that price.
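At those rates, per-request cost is simple arithmetic. A minimal sketch (the token counts in the example are hypothetical, not from the launch post):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_per_m: float = 3.00,
                     output_per_m: float = 15.00) -> float:
    """Estimate one request's cost at Sonnet 4.6 list pricing
    ($3 per million input tokens, $15 per million output tokens)."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# A 200k-token prompt (a mid-sized codebase) with a 4k-token reply:
print(round(request_cost_usd(200_000, 4_000), 2))  # 0.66
```

Worth noting: at this pricing, input dominates for long-context work, so a full 1M-token prompt costs about $3 before the model writes a single word back.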

And here’s the part that should make you stop scrolling: early users are reporting that Sonnet 4.6 beats Opus 4.5 — Anthropic’s previous frontier model — in head-to-head developer preference tests. Not just occasionally. 59% of the time in Claude Code tasks. If that number holds at scale, Anthropic may have just made the $75/month Pro tier feel like the Opus tier did six months ago. That’s either a very good deal for users, or a very aggressive depreciation of their own top-tier product. Maybe both.

Let’s get into it.


The Context: Sixteen Months of Computer Use

To understand why Sonnet 4.6 matters, you need to remember where Anthropic started with computer use.

In October 2024, Anthropic launched the first general-purpose computer-using AI model. They were upfront about its limitations — “still experimental, at times cumbersome and error-prone” were their own words. It was impressive as a demo and frustrating in practice. You could watch Claude wiggle a mouse around a screen and fill in a form, but you wouldn’t trust it with anything important.

Sixteen months later, the trajectory is steep. The benchmark Anthropic uses — OSWorld — tests AI models on hundreds of real-world tasks inside a simulated computer running Chrome, LibreOffice, VS Code, and other everyday software. No special APIs, no purpose-built shortcuts. Just screen, mouse, keyboard. The way a human would do it.

Sonnet 4.6’s OSWorld scores represent a significant jump from Sonnet 4.5. And more importantly, the benchmark numbers are starting to be echoed in real-world reports: early access users describe Sonnet 4.6 reaching “human-level capability” on tasks like navigating complex spreadsheets or filling out multi-step web forms across multiple browser tabs.

That’s the arc. From “impressive demo” to “genuinely useful” in about a year and a half. At this rate, the next sixteen months get interesting.


The Technology: What’s Actually New

Anthropic’s announcement was characteristically light on architecture details, but here’s what we know is actually different:

🔷 1M Token Context Window (Beta)

The headline capability upgrade. One million tokens is roughly 750,000 words of English prose, or a codebase of over 75,000 lines, or a year's worth of customer contracts, all in a single request.

What matters isn’t just the size — previous models often had context windows that technically fit but performed poorly at the edges. Anthropic specifically claims Sonnet 4.6 “reasons effectively across all that context,” which is meaningfully different from just accepting the tokens.

The 1M context window is currently in beta, which means you may hit rate limits or access constraints depending on your usage tier.
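Before building around the beta window, a cheap pre-flight size check helps avoid surprise rejections. The ~4-characters-per-token ratio below is a rough heuristic for English text, not the real tokenizer; use an actual token count in production:

```python
CONTEXT_LIMIT = 1_000_000  # Sonnet 4.6's beta window, in tokens

def fits_in_context(documents: list[str],
                    limit: int = CONTEXT_LIMIT,
                    chars_per_token: int = 4) -> tuple[int, bool]:
    """Crude size check: estimate tokens from total character count
    and compare against the context window."""
    est_tokens = sum(len(d) for d in documents) // chars_per_token
    return est_tokens, est_tokens <= limit
```

Leaving headroom below the limit is also wise: the prompt shares the window with system instructions and the model's own output.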

🔷 Improved Prompt Injection Resistance

Computer use creates a specific security risk: malicious instructions hidden in web pages or documents can attempt to hijack what the AI does. Anthropic calls this a “prompt injection attack,” and it’s a real threat for any agentic workflow where Claude browses the web or processes external documents.

Sonnet 4.6’s safety evaluations show “major improvement” in prompt injection resistance compared to Sonnet 4.5 — the report says it now performs similarly to Opus 4.6 in this dimension. For anyone deploying Claude in agentic contexts, that’s not a footnote. That’s a prerequisite.
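Model-side resistance doesn't remove the need for application-side guardrails. One common pattern (illustrative only, not Anthropic's mechanism; the action names are hypothetical) is to vet every model-proposed action against an explicit allowlist, so an injected instruction can't unlock a capability you never granted:

```python
# Illustrative application-side guardrail for an agentic workflow.
ALLOWED_ACTIONS = {"read_page", "click", "fill_form"}  # hypothetical names

def vet_tool_call(action: str, allowed: set[str] = ALLOWED_ACTIONS) -> bool:
    """Reject any proposed action outside the allowlist, no matter
    what the page content told the model to do."""
    return action in allowed
```

The point of the design: even a fully hijacked model can only choose among actions you pre-approved.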

🔷 Instruction Following and Consistency

Developer feedback during early access consistently flagged the same two things: Sonnet 4.6 reads context before modifying code rather than diving in blind, and it consolidates shared logic rather than duplicating it. These sound like small things. Over a three-hour coding session, they’re the difference between a model you trust and a model you’re constantly cleaning up after.

🔷 Visual Output Quality

An unexpected highlight from early customer reports: frontend and visual outputs from Sonnet 4.6 are described as “notably more polished” with better layouts, animations, and design sensibility. Customers reported needing fewer rounds of iteration to hit production quality. The model apparently reaches for modern tooling unprompted — one enterprise customer noted it produced “the best iOS code we’ve tested” with better spec compliance and architecture than requested, “all in one shot.”


The Benchmarks: Numbers That Matter

Let’s look at what Anthropic and their enterprise partners are actually reporting.

Developer Preference (Claude Code)

| Comparison | Sonnet 4.6 Win Rate |
| --- | --- |
| Sonnet 4.6 vs. Sonnet 4.5 | ~70% |
| Sonnet 4.6 vs. Opus 4.5 (Nov 2025 frontier) | 59% |

This is preference data, not benchmark scores — meaning humans chose Sonnet 4.6 as producing better results the majority of the time. The specific complaints about Sonnet 4.5 that Sonnet 4.6 addressed: overengineering, “laziness” (failing to complete tasks fully), false claims of success, and hallucinated function calls.

Enterprise Benchmarks

| Domain | Result |
| --- | --- |
| OSWorld (computer use) | Significant improvement over Sonnet 4.5 |
| OfficeQA (document comprehension) | Matches Opus 4.6 |
| Insurance computer use benchmark | 94% accuracy |
| Financial Services Benchmark | "Significant jump" in answer match rate vs. Sonnet 4.5 |
| Heavy Reasoning Q&A (enterprise documents) | +15 percentage points over Sonnet 4.5 |
| Vending-Bench Arena (long-horizon business simulation) | 1st place |

The insurance number deserves a call-out. 94% accuracy on computer use for “submission intake and first notice of loss” workflows is not a research demo. That’s an enterprise-grade claim in a domain where accuracy is genuinely mission-critical.

The Vending-Bench Result

Vending-Bench Arena is a competitive evaluation where different AI models manage a simulated business over time, making decisions about inventory, pricing, and investment, all while competing against each other.

Sonnet 4.6 took a distinctive approach: it invested heavily in capacity for the first ten simulated months — spending significantly more than competitors — then pivoted sharply to focus on profitability in the final stretch. The timing of the pivot was enough to win the competition.

This is worth noting not because simulated vending machine businesses matter, but because it suggests the model developed a coherent, multi-step strategy without explicit prompting. That’s the kind of long-horizon reasoning that’s genuinely hard to benchmark, and genuinely important for agentic workflows.


The Use Cases: Where It Actually Shines

Based on enterprise feedback compiled in Anthropic’s launch post, here’s where Sonnet 4.6 is outperforming expectations:

🧑‍💻 Coding at Scale: reads context before modifying code, consolidates shared logic instead of duplicating it, and won 59% of head-to-head developer preferences against Opus 4.5 in Claude Code.

📄 Document-Heavy Enterprise Workflows: matches Opus 4.6 on OfficeQA and gains 15 percentage points over Sonnet 4.5 on heavy-reasoning Q&A over enterprise documents.

🤖 Agentic Tasks: stronger OSWorld computer-use scores, 94% accuracy on an insurance intake benchmark, and first place in the Vending-Bench Arena business simulation.

🎨 Creative / Visual: frontend output described as "notably more polished," with fewer iteration rounds needed to reach production quality.


The Concerns: What Could Go Wrong

Sonnet 4.6 is impressive, but a few things are worth tracking:

1. The 1M Context Window Is Beta

Large context windows are great in theory and sometimes rough in practice. "Beta" here means you should test before you build production workflows around it. Latency, cost, and reliability at scale are unknowns until more people push on them.

2. Preference Data ≠ Benchmark Data

The 70% and 59% preference numbers come from developer surveys in Claude Code tasks. Self-reported preference in a controlled setting doesn't always generalize. It's evidence, not proof.

3. Computer Use Still Has Limits

Anthropic says Sonnet 4.6 "still lags behind the most skilled humans at using computers." 94% on an insurance benchmark is great, but a 6% error rate in mission-critical workflows is still significant. Human oversight remains required.

4. The Opus 4.6 Question

Anthropic's current top-tier model is Opus 4.6 (released Feb 5). If Sonnet 4.6 beats Opus 4.5 in developer preference, how much gap is left with Opus 4.6? Anthropic says Sonnet 4.6 matches Opus 4.6 on OfficeQA but doesn't claim broad parity. For most users, this gap probably doesn't matter. For frontier research and the hardest reasoning tasks, it might.

5. The DoD Drama in the Background

Separately from the model launch: reports emerged this week that the Department of Defense may designate Anthropic as a "supply chain risk." That's a political and regulatory headwind with nothing to do with model quality, but it could matter for enterprise customers in regulated industries.


The Verdict: Who Should Care, and How Much

Free Claude users: You just got a significant upgrade without doing anything. The model you’re using is now meaningfully better at everything — especially if you use Claude for code, documents, or any task that requires following complex instructions over a long session. The 1M context window will probably roll out more broadly over time.

Pro Claude users: The question you should be asking is: was I paying for Opus when I should have been using Sonnet? Based on developer preference data, the honest answer for most coding and document tasks is yes. Test Sonnet 4.6 on your actual workflows before defaulting to Opus 4.6.

Claude API / Enterprise users: The performance-to-cost ratio shifted significantly in your favor. $3/$15 per million tokens was already competitive. If Sonnet 4.6 is genuinely closing the gap with Opus-tier performance on your workflows, you should be running evals this week.
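"Running evals" can start very small: the same task set against both models, scored by your own checks. A minimal harness sketch, where `call_model` is a stand-in for your actual API client and the model ids are placeholders:

```python
def run_eval(call_model, model_ids, cases):
    """Score each model on (prompt, check) pairs, where check is a
    predicate over the model's raw output."""
    scores = {m: 0 for m in model_ids}
    for prompt, check in cases:
        for m in model_ids:
            if check(call_model(m, prompt)):
                scores[m] += 1
    return scores

# Stubbed usage: a fake client that only answers correctly for one model.
fake = lambda model, prompt: "4" if model == "model-a" else "5"
print(run_eval(fake, ["model-a", "model-b"],
               [("2+2?", lambda out: out == "4")]))
# {'model-a': 1, 'model-b': 0}
```

The design choice that matters: checks are predicates on outputs rather than exact-match strings, so the same harness handles free-form answers, code that must compile, or JSON that must parse.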

Builders using computer use: This is the release to take seriously. 94% insurance benchmark, improved prompt injection resistance, human-level spreadsheet navigation — computer use is crossing from “experimental” to “production-viable” territory for specific, well-defined workflows. The key phrase is well-defined — you still need to specify exactly what you want and build in error handling.

Bottom line: Sonnet 4.6 is the best value model Anthropic has ever shipped. If you’re using Claude for work, the upgrade is free and immediate. If you’re building on Claude, it’s time to re-run your model selection math.


Appendix: Specs, Pricing, Links

Claude Sonnet 4.6 Quick Reference

| Spec | Value |
| --- | --- |
| Release date | February 17, 2026 |
| Default for | Free + Pro Claude users |
| Input pricing | $3 per million tokens |
| Output pricing | $15 per million tokens |
| Context window | 1M tokens (beta) |
| Prior default | Claude Sonnet 4.5 |

Context

This launch comes days after Anthropic’s $30 billion Series G funding round (Feb 12), valuing the company at $380 billion post-money. The model cadence — Opus 4.6 on Feb 5, Sonnet 4.6 on Feb 17 — suggests Anthropic is in a fast-iteration phase. Expect the current performance gap between Sonnet and Opus to narrow further with each release cycle.