07.03.2025 | By: Ashu Garg, Jaya Gupta
A new class of enterprise software company is emerging. Thousands of AI-native startups have launched in the past 18 months, each pitching an intelligent agent to replace human workers: AI SREs, AI SDRs, AI accountants, AI paralegals. These are not workflow accelerators or data entry helpers, but end-to-end systems designed to do the work for you.
We call them Services-as-Software companies, and they’re redefining the structure, value, and go-to-market motion of enterprise software.
Success in Services-as-Software is fundamentally different from success in the SaaS era. Gone are the days when startups could differentiate themselves by having the best features. AI has accelerated the collapse of software into primitives. Decades of enterprise UX and domain logic have been compressed into a few modular capabilities: inbox triage, structured data tables, LLM-powered search, workflow routing, and generative response. What used to take months of application-specific design can now be assembled in a weekend.
As a result, nearly every AI product now looks the same. A legal ops system looks like a claims processor. A RevOps copilot looks like an underwriting assistant. When every company can ship the same primitives using the same models, what you build is no longer your moat. How you integrate, embed, and operate becomes the moat.
That shift – from tooling to delivery – is why Services-as-Software companies are outperforming their peers. They deliver outcomes.
Current AI capabilities require significant customization to deliver reliable business outcomes. This is particularly evident in enterprise deployments where data variability is extreme, workflow complexity is high, and performance standards are unforgiving. Consider the difference between a consumer chatbot that provides “helpful” responses and an enterprise legal assistant that must flag compliance risks with 99%+ accuracy.
Over the past year, we’ve studied dozens of Services-as-Software companies and found three consistent patterns that separate companies with real traction from those riding the hype cycle.
In the pre-AI software era, product differentiation came from proprietary code, superior architecture, UX craftsmanship, and feature velocity. Engineering organizations competed on the strength of their frameworks, infrastructure, and ability to ship roadmap items at pace.
In the AI world, differentiation has become even harder. Foundation models – Claude, GPT-4, Gemini – are available to all. Open-source alternatives rapidly close any capability gaps. The cost of building surface-level functionality (autocomplete, classification, summarization) is approaching zero. So how can startups differentiate themselves?
For Services-as-Software companies, differentiation comes from business outcomes. What matters now is how deeply a system embeds into a customer’s operating environment: how well it conforms to their internal workflows, idiosyncratic data structures, domain-specific language, and organizational incentives.
The changing role of the forward-deployed engineer provides a clear example of how this shift is playing out in practice. In the pre-AI era, forward-deployed engineers were often treated as specialized implementation consultants – technical enough to configure systems and customer-facing enough to support go-lives. Now, forward-deployed engineers have become one of the most strategic assets in enterprise AI companies.
Harvey is another great example. Dozens of legal AI vendors can extract entities from contracts. Harvey differentiates by using forward-deployed legal engineers who sit inside Am Law 100 firms for weeks to codify how redlines are handled, how clauses are structured, and how decisions are escalated. Those implementations become fine-tuned substrates that fold into Harvey’s deployment framework.
In short, in enterprise AI, integration is not a post-sale activity. It is the product surface.
Traditional enterprise software sales follow a predictable sequence: lead qualification, discovery calls, product demos, technical evaluation, commercial negotiation, and post-sale implementation. This linear process works when customers can evaluate software functionality independently of their specific operational environment.
AI systems have fundamentally broken this model. Because AI performance depends on data quality, workflow integration, and domain-specific tuning, customers can’t meaningfully evaluate these systems without experiencing them in their actual operating environment. The customer now expects to experience functionality, integration, and outcome before a contract is signed.
The market has grown increasingly skeptical as AI companies promise transformative outcomes. Customers have adopted an “I’ll believe it when I see it” mentality, demanding proof of concept with real data before any commitment. Consider a Fortune 500 company evaluating an AI-powered procurement assistant. Traditional demos with clean sample data fail because their actual purchase orders contain legacy formatting, incomplete vendor information, and industry-specific terminology that only surface when processing real transaction volumes.
When AI systems claim to automate entire human roles, the evaluation bar becomes exponentially higher – the AI must handle every exception a human would encounter, from vendor disputes to emergency purchases to policy interpretations.
This has created a cost of sale crisis. AI POCs now require data ingestion, orchestration logic, prompt tuning, and live model validation. Unlike traditional SaaS pilots that take days to configure, AI evaluations demand forward-deployed engineering time, stakeholder alignment, and workflow-specific customization. The cost of evaluating a bad-fit customer is measured in headcount hours, not product clicks.
Moreover, this effort doesn’t end at implementation. AI solutions require ongoing adaptation to remain performant as businesses evolve – new product lines, revenue streams, and workflows all demand continuous tuning and adjustment. The traditional “deploy and maintain” model has become “deploy and continuously re-engineer.”
Churn in this model carries a material cost. When a pilot fails, the vendor forfeits not only anticipated revenue but also the weeks of implementation work required to reach proof-of-concept. Even modest churn can erode margins once the substantial engineering effort and the non-trivial token expenses – far higher than the penny-level costs of traditional software infrastructure – are taken into account.
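To make the arithmetic concrete, here is a rough sketch of pilot-level unit economics. Every figure (ACV, forward-deployed engineering hours, hourly cost, token spend) is a hypothetical assumption rather than reported data; the point is only that a churned pilot leaves real engineering and inference cost on the table.

```python
# Hypothetical pilot-level unit economics. All figures are illustrative assumptions.

def pilot_contribution(acv, fde_hours, fde_hourly_cost, monthly_token_cost, months=12):
    """First-year contribution for one deployment."""
    implementation_cost = fde_hours * fde_hourly_cost   # forward-deployed engineering
    inference_cost = monthly_token_cost * months        # ongoing token spend
    return acv - implementation_cost - inference_cost

# A won deal: $150k ACV, 300 FDE hours at $120/hr, $2k/month in tokens.
won = pilot_contribution(acv=150_000, fde_hours=300,
                         fde_hourly_cost=120, monthly_token_cost=2_000)

# A churned pilot: the same implementation effort, three months of tokens, no revenue.
lost = pilot_contribution(acv=0, fde_hours=300,
                          fde_hourly_cost=120, monthly_token_cost=2_000, months=3)

print(won)   # 90000  -> roughly 60% contribution, well below classic SaaS margins
print(lost)  # -42000 -> a failed pilot burns real engineering and inference spend
```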
Leading companies are adapting through strategic classification systems. One healthcare portfolio company distinguishes between “Below the Line” deals with standard workflows and templated integrations versus “Above the Line” deals requiring custom systems that trigger gating processes and manual technical scoping. Harvey co-develops use cases with legal teams during sales so workflows are live and measurable before contracts are signed. Clay equips its sales team with practitioners who do the work while selling the automation.
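A lightweight version of that gating logic might look like the sketch below. The fields and thresholds are hypothetical – every company draws the line differently – but the mechanism is the same: standard deals flow through templated integrations, while custom deals trigger manual technical scoping before anyone commits engineering time.

```python
from dataclasses import dataclass

@dataclass
class Deal:
    templated_integrations: bool   # can we deploy with existing adapters?
    custom_workflows: int          # workflows that need bespoke engineering
    estimated_fde_hours: int       # forward-deployed engineering estimate

def classify(deal: Deal) -> str:
    """Route standard deals straight through; gate custom ones for manual scoping."""
    if deal.templated_integrations and deal.custom_workflows == 0:
        return "below the line"            # standard workflow, templated integration
    if deal.custom_workflows > 2 or deal.estimated_fde_hours > 200:
        return "above the line (gated)"    # triggers technical scoping before commit
    return "above the line"

print(classify(Deal(True, 0, 40)))    # below the line
print(classify(Deal(False, 4, 350)))  # above the line (gated)
```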
The companies that master this new model win big. Each successful engagement compounds knowledge through reusable adapters, workflow patterns, and stakeholder frameworks that make future deployments faster and cheaper. Enterprise AI companies must accept the elevated cost of sale with the understanding that the depth of their customer integrations will eventually become their competitive moat. In a world where AI capabilities rapidly commoditize, implementation expertise becomes the lasting differentiator.
The shift to Services-as-Software forces companies to think less about feature gating and more about value alignment. Traditional SaaS pricing was built for a world where software was a tool. In the Services-as-Software model, pricing must reflect that AI is doing work, not just enabling it.
While business models vary, the trendline is moving from access-based to outcome-based pricing. We propose understanding this evolution as a spectrum: access-based, usage-based, workflow-based, and outcome-based pricing.
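As a rough illustration of how the billable unit shifts along that spectrum, the sketch below prices the same month of activity four ways. The rates and volumes are invented purely for illustration; the takeaway is that each step ties revenue more tightly to the work the AI actually completes.

```python
# Four points on the pricing spectrum, applied to the same month of activity.
# Rates and volumes are hypothetical.

def access_based(seats, price_per_seat=100):
    return seats * price_per_seat                        # classic SaaS: pay for access

def usage_based(million_tokens, price_per_million=15):
    return million_tokens * price_per_million            # pay for consumption

def workflow_based(workflows_completed, price_per_workflow=4):
    return workflows_completed * price_per_workflow      # pay per unit of work performed

def outcome_based(outcomes, value_per_outcome=500, revenue_share=0.2):
    return outcomes * value_per_outcome * revenue_share  # pay a share of delivered value

print(access_based(seats=50))                    # 5000
print(usage_based(million_tokens=400))           # 6000
print(workflow_based(workflows_completed=2000))  # 8000
print(outcome_based(outcomes=120))               # 12000.0
```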
In the new AI world, buyers don’t purchase software; they purchase the outcomes it delivers. Pricing will become more outcome-based over time as application-layer companies continue to reimagine and own more and more human workflows. The progression toward outcome-based pricing will be messy, with false starts, hybrid tiers, and plenty of margin surprises. The sooner startups internalize that reality, the sturdier (and stickier) their customer relationships will become.
Over the past year, AI has flattened the old hierarchy of “features → workflows → outcomes.” When primitives are free and demos interchangeable, durable advantage shifts to implementation depth: forward-deployed engineers who tame messy data and edge-case workflows; sales motions that blur pre- and post-sales so customers experience real value before they sign; and contracts that evolve from seats to usage to tasks and, ultimately, to shared outcomes.
Chasing “vibe revenue” can juice early sign-ups, but lasting growth comes only from the loop – rapid speed-to-value, compounding operational data, and renewals that swell because the software keeps doing more of the work, more reliably, every quarter.
Why fight so hard to perfect that loop? Because the prize isn’t the familiar $200B SaaS pool; it’s the $4.6T enterprises pour each year into salaries and outsourced services – the very labor that intelligent agents are now poised to absorb. Every startup that masters deep integration and outcome insurance carves off a slice of that multi-trillion-dollar frontier – and each successful deployment becomes cheaper, faster, and harder for competitors to dislodge. The opportunity is vast, and the only currency that matters is the speed with which you can turn promises into provable results.