Narrative power in the AI era

11.26.2025 | By: Ashu Garg

AI is both a technology and a story we’re telling.

I’ve been working in AI since 2006. Back then, we were using it to build ad recommendation and speech recognition systems. Today, no one thinks of those things as AI anymore. They’re just… software.

AI has always been a moving target. As AI pioneer John McCarthy put it, “As soon as it works, no one calls it AI anymore.” Once chess-playing computers, real-time traffic prediction, and spam filtering became reliable, they stopped being AI – they became plain old technology. McCarthy called this the “AI effect”: AI is continually redefined as whatever machines can’t yet do.

Over the past year, we watched AI’s goalposts move in real time. We’ve gone from being amazed that AI can generate a polished paragraph to wondering when it can write a novel; from surprise that it can summarize a dense research paper to asking when it will make original scientific discoveries; from marvel that it can write working code to anticipating when it will build entire software companies. AI’s magic at first wows, then becomes routine. AI’s frontier recedes as fast as we approach it.

This year, the goalposts moved again – only this time, because we’d actually reached them. Across a broad range of cognitive tasks, AI systems crossed the threshold of human-level performance. We blew past AGI and immediately set our sights on a new, more ambitious goal: superintelligence.

The speed of this shift is worth pausing on. Over the past few months, there’s been ample debate over whether we’re living through an AI bubble, fueled by skepticism over the staggering amounts of capital flowing into GPUs and data centers. But how the AI conversation itself has changed – the way we moved from AGI to superintelligence practically overnight – has received very little attention.

I think this misses a key dynamic of our current AI moment. The truth is, AI is both a technology and a story we tell. The two are inseparable. The story doesn’t just follow the technology – it actively shapes what the technology becomes. Narrative, capability, and capital form a self-reinforcing flywheel, with each element pulling the others forward.

Understanding how that flywheel works explains not just where we are now, but where this all could be heading.

Crossing the threshold

In 2023 and 2024, AGI dominated the conversation. I wrote an edition of this Substack about the term’s slipperiness. OpenAI had centered AGI in its mission years earlier in its 2018 charter, which pledged “to ensure that artificial general intelligence (AGI) benefits all of humanity.” From then on, Sam Altman continued to cast AGI as the company’s north star. This built to a crescendo a year ago, when Altman declared “we are now confident we know how to build AGI” and predicted that the first AI agents would “join the workforce” in 2025.

But even as the confidence rose, AGI’s definition stayed frustratingly vague. OpenAI’s charter described AGI as “highly autonomous systems that outperform humans at most economically valuable work.” But outperform which humans – the average or the expert? In which domains – every domain or just most?

Then, this year, the evidence of AGI mounted. Across image classification, reading comprehension, language understanding, visual reasoning, and competition-level mathematics, frontier AI systems converged at or above 100% of the human baseline. In late July, OpenAI and Google announced that their systems had achieved gold-medal performance on the International Math Olympiad – a feat requiring extended, multi-step proofs that had seemed out of reach just months earlier.

By mid-year, by any reasonable definition of AGI – systems that match or exceed human capability across a broad range of cognitive tasks – we had arrived. Human-level performance, once the ultimate aspiration, began to feel like yesterday’s goal.

Source: 2025 AI Index Report, Stanford HAI

This created a narrative problem. If human-level performance was supposed to be the target, and we’d reached it, what came next?

The answer was to aim higher. Suddenly, both leading labs and big tech incumbents began talking not about AGI but about superintelligence: AI systems that surpass human cognitive abilities.

Meta made the shift concrete in June when it invested $15B for a 49% stake in Scale AI, effectively acquihiring CEO Alexandr Wang to lead a new division named the “Superintelligence Lab.” This framing served as both a justification for Meta’s massive CapEx on AI and a recruiting strategy. (A superintelligence lab is a far more compelling rallying cry for top researchers than an ad-optimization effort.)

OpenAI followed quickly. In his June essay “The Gentle Singularity,” Altman argued that 2025’s AI systems were already “smarter than people in many ways” and that “the least-likely part of the work is behind us.” The path to recursive self-improvement – AI systems doing AI research, improving themselves, and accelerating scientific discovery – was, in his telling, largely solved. “OpenAI is a lot of things now, but before anything else, we are a superintelligence research company,” he asserted. The central question flipped from “can we create AGI?” to “how far beyond human intelligence can we go?”

The term AGI carried particular weight for OpenAI. Its 2019 partnership with Microsoft included a trigger tied to declaring AGI, after which Microsoft’s “pre-AGI” licensing and IP rights could shift. As scrutiny around what counted as AGI increased, the fuzziness that once made the term useful in marketing became a contractual liability. Reframing the destination as superintelligence avoided those AGI tripwires and aligned the company’s narrative, research agenda, and long-term product roadmap around a more open-ended goal.

The shift to superintelligence did what great narratives in technology cycles do: it expanded the frame, creating a new horizon to organize ambition, talent, and investment. And unlike AGI, which at least theoretically had a finish line (human-level performance), superintelligence has no upper bound. There’s always another level beyond. It’s an aspirational goal that can justify indefinite investment – which is precisely what makes it so powerful.

When the story becomes the strategy

This pattern – the way narrative shapes investment, which shapes capability, which fuels new narratives – is playing out most dramatically at OpenAI.

Last year, I was quite negative on OpenAI. I didn’t believe they could maintain a model lead. In some ways, I was right. Commoditization came fast. Anthropic and Google have narrowed the gap. Open-source models have proliferated. Their technical advantage has not held.

But I was also wrong. I underestimated OpenAI’s ability to build products and the power of its brand, which is also its narrative. ChatGPT is becoming the interface to the web for millions of users.

The way OpenAI has orchestrated its growth reveals something important about how narrative, capital, and capability interact in technology cycles. What looks from the outside like a single unstoppable flywheel is, in practice, a set of tightly coupled loops that reinforce one another.

As long as all three loops – narrative, capital, and capability – stay in sync, the flywheel keeps accelerating. But if any one slips, the flywheel loses momentum. The spin slows, and what looked unstoppable can suddenly feel precarious.

Right now, OpenAI’s loops are remarkably in sync. The storytelling is ambitious but grounded in real capability gains. The capital is flowing. The product keeps getting better. Whether they can sustain this remains the central question.

The economics of narrative

“Superintelligence” does work that AGI can’t. It signals that the real breakthrough – the one big enough to justify today’s valuations and tomorrow’s spend – is still ahead. It keeps urgency high. Maintaining momentum means continually raising the ceiling on what’s possible – which is why a story with no upper bound is so strategically essential.

The tech ecosystem has rallied around this new narrative with remarkable speed. In a field that’s tightly networked, shared signals spread quickly, informing what CEOs say in keynotes, what investors write to LPs, and what gets amplified on podcasts and conference stages. Once the new narrative takes hold, it shapes what OpenAI builds next, the claims startups make, and how public markets value the “Mag Seven.” The story doesn’t merely follow reality – it constructs it.

The idea that narratives shape our economy isn’t new, though it’s often overlooked. Nobel laureate Robert Shiller’s concept of “narrative economics” describes stories as viral, epidemic-like forces that move through populations and influence economic behavior at scale. As he explained, “people live their lives as fulfilling a story… It becomes the meaning of life – how I fit into some story of our time.”

These shared narratives shape our motivations at the deepest levels. They define what problems we focus on and how urgently we pursue them. They give founders, investors, and researchers the language – and the authority – to operate inside new frames of possibility. They expand what founders feel they have permission to attempt.

In venture, narrative momentum and investing momentum are two sides of the same coin. Every early-stage startup is, at its core, a story. A compelling narrative doesn’t just attract capital – it pulls in talent, partners, customers, and the legitimacy needed to fuel growth. When the story changes, the flow of capital changes with it. When capital flows differently, technology evolves in new directions, which changes the story. The flywheel spins on: story → capital → capability → stronger story.

History bears this out. Economic booms are propelled by grand visions of the future. During the dot-com era, investors weren’t merely chasing returns – they were chasing a belief. The narrative momentum built rapidly: the more people believed the story, the more capital flooded in, which in turn validated and amplified the story.

Crucially, a popped bubble doesn’t prove the story was false. Often, it was just early. The dot-com crash wiped out countless startups, yet it paved the way for Google and Amazon to realize the vision of the internet on a longer timeline. In the years since, these companies have created over $5T in market cap. The story came true, just not on investors’ initial schedule.

The future is the story we tell about it

Technological revolutions rely on more than just capital and code – they run on stories. Unlike earlier debates around the meaning of AGI, the shift to superintelligence isn’t semantic. AGI implied we were trying to match human intelligence. Superintelligence implies we are trying to surpass it – to build a new species of intelligence altogether.

AI’s advances in 2025 didn’t just make the superintelligence narrative plausible – they made it necessary. Once AI systems matched or exceeded human performance, the only direction left was beyond. The story had to change because the technology changed. And once the story changed, it pulled the technology forward with it.

AI leaders deeply understand this dynamic. It’s why we’ve seen a wave of AI manifestos over the past two years – from Mark Zuckerberg, Dario Amodei, and others. These aren’t product announcements. They’re efforts to write the stories that determine how those products evolve and what resources flow toward building them.

Altman may be the most skilled at this. His ability to articulate a compelling, expansive vision is central to how he rallies teams and why he’s proven to be the most successful startup fundraiser of our generation. In his “Gentle Singularity” essay from June, he captured what makes this moment so disorienting.

The essay describes the mismatch at the heart of our AI moment. Daily life feels very familiar. We have dinner with our families, sit in the same traffic, swim in the same lakes. At the same time, each breakthrough in AI moves us into stranger and stranger territory. This strangeness is quickly assimilated, becoming part of what we accept as normal.

As we look toward 2026, one thing we can count on is that the goalposts will continue to move. In AI, they always do.

