
AI’s next frontier is the agentic web


05.30.2025 | By: Ashu Garg

What the latest moves by Google, Microsoft, and OpenAI tell us about AI’s future.

A year ago, nearly all the focus in AI circles was on the models themselves: how large they’d grown, which benchmarks they were saturating, and whether their impressive leaps in reasoning capability could continue scaling.

Today, the conversation has shifted. It’s not because models no longer matter (they absolutely still do), but because the critical questions now sit higher up the stack. Consumer experiences, enterprise integrations, developer ecosystems: the debate is no longer just about who has the smartest AI, but whose AI-driven ecosystem will become users’ default.

Last week’s rapid-fire announcements from OpenAI, Microsoft, and Google clarified this shift for me. OpenAI debuted Codex, its engineering agent embedded in ChatGPT, building on its April acquisition of AI coding/IDE startup Windsurf. Later in the week, it upped the ante, announcing its acquisition of Jony Ive’s stealth hardware startup io for $6.5B – a move that positions it to control not just software but the devices consumers use every day and the channels through which that software is distributed.

At its Build conference, Microsoft advanced its vision for an “open agentic web,” positioning Azure AI Foundry as the cloud backbone of choice for building AI-powered apps and backing a range of open standards that promise to reshape the structure of the internet itself. A day later at I/O, Google unveiled over 100 new features and products, including a standalone “AI Mode” in Search – all in an urgent effort to defend its ad-driven search empire and stake out a dominant position in the emerging agentic economy.

To make sense of these moves, I found myself mapping out the competition along four interconnected fronts: productivity apps (spanning consumer, SMB, and enterprise), coding assistants, agentic infrastructure, and raw model capabilities.

In this month’s edition, I explore exactly how Microsoft, Google, and OpenAI are positioning themselves along these dimensions. While there are certainly other important players (notably Amazon and Meta), I’m focusing on these three because their recent moves so vividly clarify what’s at stake. These companies aren’t just fighting for today’s market – they’re laying the foundations for how computing, digital interactions, and business itself will operate for generations to come.

The “AI everywhere” strategy

In the space of productivity apps, Microsoft clearly owns the enterprise market; Google has a legacy lead in consumer, a strong foothold in SMBs, and is pushing hard into enterprise; and OpenAI is racing upward into SMBs/enterprise from consumer – first pressuring Google, and increasingly challenging Microsoft.

Microsoft’s approach is simple: put AI everywhere. Word, Excel, Outlook, Teams – almost every Office app now has a built-in AI Copilot. Microsoft monetizes all this by charging enterprise customers a premium per seat, adding AI revenue on top of its massive user base.

But there’s a catch: even though sales look great on paper (companies are clearly open to experimenting), people aren’t using these tools as much as you’d think. Consumers rarely pick Microsoft first when they’re looking for AI help, and even employees at large companies often find the experience uneven or frustrating. Microsoft’s biggest advantage isn’t user enthusiasm – it’s the decades-long relationships it has built with IT departments and CIOs. Enterprises know Microsoft. They trust it, it’s secure, and it integrates easily into existing systems. Even if Microsoft isn’t the most exciting or innovative option, it’s still seen as the safest one.

Google is in a more complex and precarious position: its ad-driven search business is wildly profitable, but it’s also the very thing that AI-driven tools like ChatGPT threaten most. Google’s response has felt chaotic – after months of criticism for moving too slowly, they now seem to be throwing everything at the wall to see what sticks.

Initially caught off guard by ChatGPT, Google is punching back. Last week, it announced over 100 new AI-driven products and features at I/O. Central to their strategy is Gemini 2.5, which is powering a conversational “AI Mode” in search that looks a lot like ChatGPT. This new mode lets users get detailed answers and keep asking questions without clicking on external links – which, notably, directly threatens Google’s core revenue stream of paid clicks.

For now, Google still dominates search volume – according to Semrush data, they’re handling roughly 14B searches a day, about 373x more than ChatGPT. But Google searches tend to be short, single-turn queries, whereas ChatGPT sessions are much deeper, averaging around eight questions per session. Google’s problem here is clear: if people stop clicking links, Google needs a new way to make money. Their answer so far has been to test premium offerings like “AI Ultra,” priced at $249.99 a month – more or less identical to ChatGPT’s premium tier.

Google likes to talk about how its AI tools are more powerful, especially for complex tasks like creating spreadsheets. But it’s still unclear whether this broad, try-everything approach is strategic or just scattered. Google is everywhere, but it’s not obvious yet where it’s clearly winning.

OpenAI, meanwhile, is rising through grassroots momentum. With over 400M weekly active users as of February, ChatGPT has become the definitive consumer AI brand. Sam Altman talks about ChatGPT as something much bigger than a chatbot – he describes it as a kind of “Life OS,” a digital assistant that remembers your entire context and orchestrates all aspects of your digital life, personal and professional.

To bring this expansive vision to life, OpenAI is building out an integrated AI ecosystem – from underlying models to products, custom silicon, and specialized AI hardware. Their acquisition of io underscores the seriousness of OpenAI’s ambitions. The goal isn’t just superior software but end-to-end AI experiences delivered through dedicated hardware and perhaps even its own app-store ecosystem. Still, for all their bravado, their consumer business is burning money and isn’t expected to be cash-flow positive until 2029.

OpenAI’s entry into enterprise markets has also followed a bottom-up, consumer-driven route. Rather than negotiating corporate-wide licenses, ChatGPT is spreading through individual employees who buy premium subscriptions and expense them. This grassroots adoption is quietly pressuring IT departments and company leaders and is slowly eroding both Google’s and Microsoft’s entrenched positions.

Developers & the fight for the IDE

Winning over developers is crucial for every major AI player. Developers shape technical decisions inside companies, build next-generation software (the kind that creates platform lock-in), and represent a uniquely engaged and valuable user base – they spend hours each day interacting deeply with their tools, far more than a casual user might use ChatGPT.

Microsoft had every reason – and every opportunity – to dominate this space. They started early and strong with GitHub Copilot, which quickly became synonymous with AI-assisted coding. By integrating Copilot directly into VS Code (by far the most popular IDE), they positioned themselves to be practically unbeatable. Over the past year, Microsoft upgraded Copilot from a helpful pair programmer to an autonomous coding agent capable of independently tackling complex, end-to-end tasks. They even open-sourced the GitHub Copilot Chat extension to court developer goodwill.

Yet despite all these advantages, Microsoft hasn’t locked down its lead. The explosion of compelling alternatives (like Replit’s Ghostwriter, Cursor, Windsurf, and Anthropic’s Claude) shows that Microsoft has struggled to evolve Copilot fast enough to match developer expectations. These competitors offer simpler interfaces and smoother experiences, highlighting Microsoft’s perennial weakness: its failure to shed its baggage and the constraints of its legacy products.

OpenAI finds itself as both a pioneer and a follower in this space. Its Codex model powered Copilot’s rise, but by partnering closely with Microsoft and GitHub, OpenAI distanced itself from developers. Recently, OpenAI has sharply pivoted to reclaim developer mindshare. Last week, it launched Codex as a software engineering agent directly inside ChatGPT. This came on the heels of its $3B acquisition of Windsurf: a signal of how urgently they want to win back direct developer relationships.

Google, meanwhile, remains powerful in theory but scattered in practice. Gemini Code Assist, Google’s answer to Copilot, is currently free, aiming to attract startups, students, and hobbyists. Google has a clear advantage in its raw technical horsepower paired with unmatched access to open-source code repositories indexed by its search engine. At I/O, Google also introduced Jules, an autonomous coding agent akin to OpenAI’s Codex, now in public beta.

Despite these strengths, Google has struggled to translate its model capabilities into meaningful developer traction. Jules has been well received but has limited awareness and market share so far. The company’s approach still feels fragmented, more “throwing ideas at the wall” than building a coherent and compelling developer ecosystem.

In short, this was Microsoft’s race to win, yet they run the risk of losing ground. Google has all the ingredients to succeed, yet they feel directionless. And OpenAI, despite early missteps, now has a shot at winning back developers.

Architecting the agentic web

In my view, the biggest shift of 2025 isn’t about products or models, but about AI agents – software programs capable of independently planning, acting, and collaborating across digital tools and services. These agents go far beyond chatbots or basic productivity tools; they represent a fundamental reshaping of how computing itself will work. If the last decade’s refrain was “there’s an app for that,” the next could well be, “there’s an agent for that.”

For this agent-driven future to become reality, we’ll need more than smarter models and better chat interfaces. We’ll need shared infrastructure that allows these agents to operate and interact. Just as HTTP and HTML shaped the early web, the standards, protocols, and systems that define how agents communicate will lay the foundations for the next era of computing. Microsoft, Google, and OpenAI each clearly understand this, and each is charting a distinct path.

Microsoft’s approach here is again very clear: drive adoption of Azure. Their bet is that businesses will soon rely heavily on complex, autonomous agent systems. Microsoft wants these workloads – which promise to generate highly profitable compute and inference revenues – to run on Azure.

At Build, Microsoft unveiled Azure AI Foundry Agent Service, a platform that’s explicitly designed for enterprises to easily build and orchestrate their own agentic systems. To encourage adoption, Microsoft is supporting open interoperability standards like Google’s Agent-to-Agent (A2A) protocol and Anthropic’s Model Context Protocol (MCP). Their aim is clear: become the essential, neutral platform that every enterprise relies on to build and deploy agents.

Perhaps even more ambitious is Microsoft’s NLWeb: a new natural-language layer for the internet. Instead of having agents scrape websites or navigate complex APIs, NLWeb allows sites to expose direct, standardized endpoints for natural-language queries. Again, the goal is clear: put Azure right at the heart of an agent-driven web.
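To make the idea concrete, here’s a hypothetical sketch of what an NLWeb-style endpoint might look like: a site exposes a single route that accepts a natural-language query and returns structured JSON an agent can consume, instead of HTML to scrape. The route shape, JSON schema, and keyword-matching logic below are illustrative assumptions of mine, not the actual NLWeb specification.

```python
import json

# Hypothetical product catalog a retail site might expose to agents.
# The request/response schema and the matching logic are illustrative
# assumptions -- not the actual NLWeb specification.
CATALOG = [
    {"name": "trail running shoes", "price": 89.0},
    {"name": "road running shoes", "price": 129.0},
    {"name": "hiking boots", "price": 149.0},
]

def handle_ask(request_body: str) -> str:
    """Handle a natural-language query POSTed by an agent.

    Accepts JSON like {"query": "running shoes under $100"} and returns
    structured results, so the agent never parses a rendered page.
    """
    query = json.loads(request_body)["query"].lower()
    # Toy "understanding": keyword overlap plus a crude price filter.
    max_price = 100.0 if "under $100" in query else float("inf")
    results = [
        item for item in CATALOG
        if any(word in item["name"] for word in query.split())
        and item["price"] <= max_price
    ]
    return json.dumps({"query": query, "results": results})
```

The point of the sketch is the contract, not the parsing: once every site answers structured natural-language queries at a known endpoint, an agent can treat the whole web as a callable API.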

Microsoft’s open stance here isn’t altruistic – it’s strategic. The company vividly remembers how the browser displaced Windows as computing’s central interface and wants to ensure that their platform remains essential in this emerging agentic paradigm.

Google introduced A2A and supports MCP – important signals of its commitment to an open, interoperable agentic web. They’ve articulated a vision of AI-first services, where apps like Gmail, Calendar, and Shopping become autonomous agents in their own right. Imagine your Gmail agent coordinating meeting times with colleagues, or your Shopping agent curating items and negotiating purchases on your behalf.

Clearly, every product team at Google has gotten the “AI memo.” Yet while Google’s models and products are impressive individually, their overall strategy here remains hazy. Google is everywhere at once, but it’s still hard to pinpoint exactly where they’re truly leading (beyond raw model capability) and how their initiatives cohere.

OpenAI, meanwhile, is laser-focused on dominating the consumer market, then moving upward into SMBs and eventually enterprises. Their ambition is to be the new browser: a one-stop gateway to the internet through which all other apps and services run. They began laying the groundwork with ChatGPT’s plugin architecture in 2023, which allowed its models to call APIs. Then came function calling, which turned ChatGPT into a more general-purpose orchestration engine. Add in voice and vision, and OpenAI now has an assistant that can see, speak, and interact with external tools: steps in the evolution of ChatGPT into a robust agent platform.

But OpenAI’s ambitions don’t stop at software. Its acquisition of Ive’s io signals a goal of full vertical integration (model, interface, device, app ecosystem) that mirrors Apple’s playbook. It brings similar risks, including concerns about lock-in and limited interoperability. But the upside is huge. If OpenAI builds the most powerful general-purpose agent, it may not need to conform to others’ standards – it could define the default instead.

The model arms race

Behind all these battles over productivity apps, developer tools, and agentic infrastructure is a relentless race for raw technical superiority.

Right now, the dynamic looks a lot like leapfrogging: OpenAI set a new bar with GPT-4; Google now claims the lead with Gemini 2.5 (and is undoubtedly working on Gemini 3); OpenAI will respond in turn. Grok, DeepSeek, and Meta’s Llama models are also firmly in the mix.

This cycle isn’t just about one-upping benchmarks for bragging rights. Enterprises might decide whose AI to use based on which model is state-of-the-art that quarter. Developers might optimize code for the model that can handle the largest context. There’s also perception and PR: being seen as the “AI leader” influences stock prices, talent hiring, and partnerships. OpenAI enjoyed that mantle through GPT-4; Google is making a play for it now.

Microsoft’s approach to raw AI prowess is different, since the company doesn’t (yet) build its own large models from scratch in the same way. Its Azure AI catalog now hosts over 1,900 models, including many open-source ones. This breadth is Microsoft’s strength: it doesn’t necessarily need the best model if it can offer the best buffet.

Ultimately, as I’ve argued repeatedly, the model layer is commoditizing. Open-source models may not be as good as the state of the art, but they’re more than good enough for most tasks. The major players are betting there’s plenty of room left to improve (hence the multi-billion-dollar training runs), but they’re also hedging by contributing to or supporting open ecosystems. Google has open-sourced model-training frameworks, Microsoft partners with open-model hubs like Hugging Face, and OpenAI plans to release an open-weight model soon.

Summing up

Zooming out, Microsoft, Google, and OpenAI aren’t just launching new products – they’re rewriting the rules of computing itself. We’re witnessing a shift away from standalone AI tools toward integrated systems of agents, where software acts, decides, and even collaborates on our behalf. This transformation touches every layer, from the interfaces we use, to the underlying operating systems, down to the fundamental economics that shape how digital services make money.

If agents become the primary mediators of our digital lives (as now seems likely) the internet economy will change profoundly. Instead of optimizing content and products for human clicks, companies might soon optimize for agent interactions. This isn’t speculative: Walmart’s CTO recently described the emergence of a “robotic shopper SEO,” underscoring how quickly agents could reshape even basic consumer interactions.

The agentic era is here – and the race to define it is well underway.

