Sam Altman Says OpenAI Has 'Basically Built AGI' – Then Immediately Walks It Back

In what might be the most consequential—and confusing—statement in AI’s recent history, OpenAI CEO Sam Altman declared in a Forbes profile published this week that “we basically have built AGI, or very close to it.”

Then, just days later, he walked it back. Sort of.

If you’re scratching your head wondering what’s happening at the world’s most influential AI company, you’re not alone. Let’s break down what Altman actually said, why it matters, and what this tells us about the state of artificial intelligence in 2026.

The Statement That Launched a Thousand Tweets

During an interview with Forbes chronicling his chaotic journey through the AI landscape, Altman made the bombshell claim about achieving Artificial General Intelligence (AGI)—the holy grail of AI research that represents human-level intelligence across all cognitive tasks.

For context, AGI isn’t just about making better chatbots or image generators. It’s about creating AI systems that can:

- Learn new tasks without being retrained from scratch
- Transfer knowledge from one domain to another
- Reason about problems they’ve never encountered before
- Set goals and plan over long time horizons

In other words, it’s the difference between a calculator that’s really good at math and a human who can do math, write poetry, plan a vacation, and debug code—all with the same underlying intelligence.

The Immediate Backpedal

Before the tech world could fully process what Altman said, he dialed things back. According to the same Forbes piece, Altman clarified a few days later:

“I meant that as a spiritual statement, not a literal one.”

He went on to explain that achieving true AGI will require “a lot of medium-sized breakthroughs,” though notably, he believes “we don’t need a big one.”

This is classic Altman—provocative, headline-grabbing, then carefully qualified. But even his clarification reveals something important about OpenAI’s current thinking.

What Does “Spiritual AGI” Even Mean?

Altman’s distinction between “spiritual” and “literal” AGI might sound like corporate double-speak, but it actually highlights a real debate happening in AI research right now.

The Capabilities Are There (Sort Of)

Current models like GPT-4 and its successors can:

- Write working code in dozens of programming languages
- Draft essays, emails, and marketing copy
- Pass professional exams in fields like law and medicine
- Translate between languages and summarize long documents

From a certain angle, that looks a lot like general intelligence. You can throw almost any cognitive task at GPT-4, and it’ll give you a reasonable attempt.

But the Gaps Are Obvious

At the same time, these models:

- Hallucinate facts with complete confidence
- Lack persistent memory and the ability to learn from experience
- Struggle with long-horizon planning and genuinely novel reasoning
- Can’t act in the physical world or reliably verify their own outputs

So are we at AGI? It depends entirely on how you define it—which is exactly Altman’s point. Spiritually, we might be there. Literally, we’re clearly not.

Why This Matters Beyond the Hype

Altman’s comments aren’t just about semantics. They reveal three critical things about where AI is heading:

1. The Bar for AGI Keeps Moving

As AI capabilities improve, our definition of AGI seems to shift. Tasks that would have seemed like AGI a decade ago—like passing the bar exam or writing coherent essays—are now routine for large language models.

This mirrors what happened with “artificial intelligence” itself. In the 1950s, a computer that could play chess was considered AI. By the 1990s, Deep Blue beating Kasparov was “just” sophisticated algorithms. Today, GPT-4 can play chess and explain the historical significance of the game, but we still don’t call it “intelligent” in the fullest sense.

2. OpenAI Believes the Path Is Clear

When Altman says achieving AGI needs “a lot of medium-sized breakthroughs” rather than “a big one,” he’s signaling that OpenAI sees a continuous path forward. They’re not waiting for a paradigm shift or a fundamental rethinking of how AI works.

Instead, they believe scaling current approaches—bigger models, more data, better training techniques, improved reasoning capabilities—will get them there. Whether they’re right is the multi-trillion-dollar question.

3. The Competitive Pressure Is Intense

It’s worth noting that Altman made these comments amid fierce competition. Google’s Gemini, Anthropic’s Claude, and other frontier models are all racing toward similar capabilities. Microsoft has bet billions on OpenAI. xAI recently merged with SpaceX, potentially unlocking massive computational resources.

In this environment, every statement from a CEO carries strategic weight. Claiming you’ve “basically” achieved AGI sends a message to competitors, investors, and talent: we’re winning.

The Anthropic Angle: A Tale of Two AI Companies

Adding another layer to this story, OpenAI recently poached Dylan Scandinaro, a safety researcher from Anthropic, to become their new “head of preparedness.” Scandinaro’s first post in his new role emphasized the urgency: “AI is advancing rapidly. The potential benefits are great—and so are the risks of extreme and even irrecoverable harm.”

Meanwhile, Anthropic—OpenAI’s chief competitor, founded by former OpenAI employees—has been pushing ahead with their own approach. Their recent expansion of the Cowork feature with domain-specific “plugins” represents a different bet: instead of racing toward general intelligence, build more reliable, specialized agentic systems.

The contrast is striking. While Altman flirts with AGI claims, Anthropic is focusing on making AI systems that are useful, reliable, and safe in specific domains. Both approaches have merit, and the winner might be determined more by market adoption than technical achievement.

What This Means for You

If you’re building products or businesses around AI, Altman’s comments—confusing as they are—offer some practical takeaways:

Don’t Wait for AGI

Whether we’ve achieved “spiritual AGI” or we’re years away from “literal AGI” doesn’t really matter for most applications. The AI tools available today are already transformative for:

- Content generation and editing
- Code assistance and review
- Customer support automation
- Research, summarization, and data analysis

Waiting for some future breakthrough means missing opportunities available right now.

But Prepare for Rapid Evolution

If OpenAI is right that AGI requires “medium-sized breakthroughs” rather than fundamental shifts, we should expect continuous, rapid improvement in AI capabilities. Building systems that can adapt to new models and capabilities will be crucial.

This means:

- Abstracting your integrations behind model-agnostic interfaces so you can swap providers
- Versioning your prompts and re-testing them when models change
- Building evaluation suites that catch regressions when you upgrade

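As a rough sketch of what a model-agnostic abstraction might look like in practice (the interface and class names here are hypothetical illustrations, not any vendor’s actual API):

```python
from dataclasses import dataclass
from typing import Protocol


class ChatModel(Protocol):
    """The minimal interface our application depends on -- not a real SDK."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class FakeModel:
    """Stand-in backend; a real adapter would call a provider's API here."""
    name: str

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"


def summarize(model: ChatModel, text: str) -> str:
    # Application code targets the Protocol, so swapping backends
    # (or upgrading to a newer model) is a one-line change at the call site.
    return model.complete(f"Summarize: {text}")


print(summarize(FakeModel("model-a"), "quarterly report"))
```

The point of the indirection is that when a better model ships next quarter, only the adapter changes, and your evaluation suite tells you whether the upgrade actually helped.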
Focus on the Boring Stuff

While headlines focus on AGI, the real value is in applying current AI to unsexy problems: data entry, summarization, classification, content moderation, customer support.

These might not make for exciting tweets, but they’re where the actual ROI lives. A chatbot that reliably handles 70% of customer service tickets is worth more than a system that can theoretically do anything but reliably does nothing.

The Bigger Picture: Living in the Liminal Space

What makes Altman’s statement so fascinating—and so confusing—is that it captures a genuine truth about where we are in 2026.

We’re in a liminal space between narrow AI and general AI. Our systems can do an astonishing range of tasks but can’t quite do everything. They’re smart enough to seem intelligent, but limited enough to remind us they’re not.

Whether you call this “spiritual AGI” or “really impressive narrow AI” is almost beside the point. What matters is that the capabilities are here, they’re improving rapidly, and they’re already changing how we work, create, and think.

Altman’s walk-back doesn’t diminish that reality. If anything, it highlights how hard it is to put labels on systems that are fundamentally unlike anything we’ve built before.

Looking Ahead

So has OpenAI achieved AGI? The honest answer is: it depends on your definition, your timeline, and your standards.

But here’s what’s undeniable: AI systems in 2026 can do things that would have been considered science fiction just five years ago. Whether we’re at AGI or just getting started, the trajectory is clear.

The real question isn’t whether we’ve arrived at some arbitrary threshold. It’s what we do with the capabilities we already have.

Because while Sam Altman and other AI leaders debate definitions and make grand claims, the rest of us are living in a world where AI is already general enough to matter.

Maybe that’s the most important insight from this whole episode: we’re so busy arguing about whether we’ve reached AGI that we haven’t fully grasped how transformative AI has already become.


What do you think? Has OpenAI achieved AGI, or is Altman just playing word games? Let us know your thoughts in the comments below.