Software ate the world. Now AI is coming for software. I chart three implications of this paradigm shift. I give a primer on large graphical models (LGMs). And I recap my partner Sid’s SXSW discussion with cybersecurity expert Dmitri Alperovitch.
For well over a decade, I’ve invested behind software as it’s gobbled up nearly every industry. Yet, today, software stands on the brink of being devoured itself. Breakneck advances in AI, achieved through exponential increases in training data and computing power, are priming this technology to consume software.
While traditional software is hard-coded and static, AI-based software is dynamic and continually learns in response to new inputs. Generative AI puts this dynamism into hyperdrive, as it can autonomously create new data and code, thus enabling the software it powers to self-improve. Endowed with net-new generative capabilities, AI-enhanced software will become ever-more omnivorous. As NVIDIA’s CEO, Jensen Huang, presciently put it: “Software is eating the world, but AI is going to eat software.”
What does the AI inflection mean for software? Let’s look at three key implications.
AI will 10x the TAM for software
Imagine that the average enterprise today spends $1,000 per knowledge worker on productivity software. If AI-infused software can double that worker’s productivity, how much will it be worth to their employer?
It’s my view that LLMs and other foundation models will meaningfully boost software spend by creating high-value AI assistants for every profession. Such helpers already exist for artists (DALL-E, Stable Diffusion, and Midjourney), writers (GPT-4), and software developers (GitHub Copilot). More recent verticalized examples include BloombergGPT and BioGPT, which are optimized (respectively) for financial and biomedical NLP tasks. Soon, we’ll see models arise for an even wider range of jobs, such as lawyers, doctors, and architects. In short order, these models will not only represent and generate media (text, images, video, and so on): they’ll also execute entire workflows. Even if we assume that companies will purchase this software at a steep discount to the commensurate rise in workers’ output, AI’s productivity multiplier represents a massive market.
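The back-of-envelope math behind this claim is worth making explicit. A minimal sketch, in which every dollar figure is a hypothetical illustration rather than market data:

```python
# Back-of-envelope math behind the "10x TAM" claim.
# All dollar figures below are hypothetical illustrations.

def ai_software_spend(workers: int, value_per_worker: float,
                      capture_rate: float) -> float:
    """Annual spend if employers pay vendors a fraction
    (capture_rate) of the extra output AI creates per worker."""
    return workers * value_per_worker * capture_rate

baseline = 10_000 * 1_000.0  # 10k knowledge workers x $1,000/seat today

# Suppose doubling a worker's productivity is worth $50,000 per year,
# and vendors capture just 20% of that value as software spend.
with_ai = ai_software_spend(10_000, 50_000.0, 0.20)

print(with_ai / baseline)  # -> 10.0, i.e. a 10x market expansion
```

Even at a steep discount to the value created (a 20% capture rate here), the spend per seat grows by an order of magnitude.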
Every software engineer will become an ML engineer
As pre-trained models abstract away the complexities of creating ML models, the lift of building with AI has never been lower. Case in point: software engineers are among the fastest-growing user segments of foundation models and LLMs.
In March, we charted the rise of a middleware layer—“Foundation Model Ops,” as we called it—that orchestrates the process of integrating large-scale models into end-user applications. This layer is messy and understandably so, as the rapid release cycle of foundation models creates shifting sands for builders. Moving forward, we’ll see the advent of better low- and no-code tools that smooth this workflow for non-ML specialists.
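In practice, this middleware layer often reduces to a thin, provider-agnostic routing interface: application code targets one API while the underlying model can be swapped as providers ship new versions. A minimal sketch, with stubbed backends standing in for real model APIs (all names are hypothetical):

```python
from typing import Callable, Dict, Optional

# A minimal "Foundation Model Ops" routing layer: application code
# calls one stable interface; model backends can be swapped out as
# providers ship new versions. Backends here are stubs, not real APIs.

class ModelRouter:
    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._default: Optional[str] = None

    def register(self, name: str, backend: Callable[[str], str],
                 default: bool = False) -> None:
        self._backends[name] = backend
        if default or self._default is None:
            self._default = name

    def complete(self, prompt: str, model: Optional[str] = None) -> str:
        # Route to the requested backend, or the default if none given.
        return self._backends[model or self._default](prompt)

router = ModelRouter()
router.register("model-v1", lambda p: f"[v1] {p}")
router.register("model-v2", lambda p: f"[v2] {p}", default=True)

print(router.complete("summarize this doc"))             # -> [v2] summarize this doc
print(router.complete("summarize this doc", "model-v1"))  # -> [v1] summarize this doc
```

The point of the indirection: when a provider releases a new model, the application pins or upgrades in one place instead of everywhere a model is called.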
Wait, don’t we already have an ML stack? We do, but it has blind spots. For starters, many of the 2010s-era ML frameworks were designed for experienced practitioners and assume deep ML expertise. Others are point solutions that fail to address the full workflow of creating AI-based products; stitching these solutions together can trigger headaches. Still others are becoming outdated as models improve. Most lack robust solutions to core enterprise needs such as integrations with existing IT infrastructure, data privacy and SOC 2 compliance, and private cloud deployment.
It’s important to remember that, for all the recent buzz around AI, only a small number of companies (the OpenAIs, DeepMinds, and Teslas of the world) have effectively operationalized ML. There remains a long tail of non-AI-native companies that are newly excited about ML’s potential to generate business value yet struggle to make sense of today’s MLOps market. It’s here that simpler, enterprise-ready solutions that cater to software engineers can gain ground.
LLMs are just the beginning
The state of the art in AI has never advanced more rapidly. While today’s LLMs boggle the mind, they’re far from the peak of AI’s potential. Consider the continued progress in multimodal AI, which portends a not-too-distant future in which the most effective programming language is human speech.
These innovations in AI will unfold along multiple paths. Take LLMs: while they excel at processing and producing outputs from unstructured data, they fall short when it comes to structured sources. By contrast, large graphical models (LGMs) are able to capture and explicitly model multidimensional relationships between variables. This makes LGMs ideal for enterprise AI use cases—whether demand forecasting in retail, auditing in insurance, or KYC and compliance in financial services—that involve tabular data. To learn more about LGMs, check out this whitepaper from our portfolio company, Ikigai, which is applying these models to enterprise workflows.
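The core idea of a graphical model, explicit dependency structure between variables, fits in a few lines. A toy sketch with a retail demand-forecasting flavor (Season → Demand); this illustrates the concept only and is not Ikigai’s implementation:

```python
# Toy directed graphical model over tabular variables: Season -> Demand.
# The dependency structure is explicit, unlike an LLM's latent one.
# Illustrative only; not how any production LGM is implemented.

p_season = {"holiday": 0.25, "regular": 0.75}

# Conditional probability table: P(Demand | Season)
p_demand_given_season = {
    "holiday": {"high": 0.8, "low": 0.2},
    "regular": {"high": 0.3, "low": 0.7},
}

def marginal_demand(level: str) -> float:
    """P(Demand) = sum over Season of P(Season) * P(Demand | Season)."""
    return sum(p_season[s] * p_demand_given_season[s][level]
               for s in p_season)

print(marginal_demand("high"))  # 0.25*0.8 + 0.75*0.3, approximately 0.425
```

Because the dependencies are modeled explicitly, forecasts like this are auditable term by term, which is precisely what regulated enterprise use cases demand.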
Talking Shop with One of the World’s Greatest Hackers
In late March, my partner, Sid Trivedi, our resident cybersecurity expert, took the stage at SXSW with Dmitri Alperovitch, cofounder of CrowdStrike and a foremost authority on all things digital defense. Before a standing-room-only crowd of 500+ attendees, the two had an hour-long conversation that ranged from Russia’s invasion of Ukraine and China’s imminent (according to Dmitri) invasion of Taiwan to the national-security risk posed by TikTok and the grit it takes to build a multibillion-dollar company. Their discussion, in which Dmitri revealed some troubling truths about the geopolitical climate and cyber threats, was, in a word, electrifying. Listen in or read the edited transcript here!
Speaking of CrowdStrike…
While we’re on the topic of CrowdStrike, I wanted to resurface an episode of the B2BaCEO podcast with George Kurtz, the company’s CEO and fellow cofounder. George joins me for an in-depth conversation on everything from hiring and board composition to balancing a hybrid product-and-services model. He also gives specific advice for all you security founders reading. Tune in here!
Startup Spotlight: Watchful
As model architectures converge, domain-specific, proprietary data has emerged as the primary accelerant of AI performance. Out-of-the-box LLMs are great for general tasks, but the most impactful AI applications are trained on specialized inputs. These high-value uses of AI center on augmenting subject matter experts, yet these experts are too expensive and pressed for time to manually label the data needed to develop bespoke models.
Enter Watchful, a suite of tools that streamlines the process of augmenting ML models with domain-specific knowledge. Watchful’s automated labeling solution allows companies to create high-quality training sets from unstructured data in a way that is scalable, observable, and explainable. With Watchful, companies can rapidly label large data sets, ensure accuracy and consistency in labeling, and search their data for both inherent and semantic information. Watchful powers use cases across multiple industries, from finance (fraud detection, AML, contract intelligence) to e-commerce (product cataloging, fraud detection, recommendation systems). Get started here!
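The intuition behind automated labeling of this kind can be sketched in a few lines of weak supervision: several noisy heuristic "labeling functions" vote on each example, and the consensus becomes a training label. This toy version (all function names and rules are hypothetical) only illustrates the core idea, not Watchful’s actual system:

```python
from collections import Counter

# Toy weak supervision: noisy heuristics vote on each example, and
# the majority vote becomes a training label. Illustrative only.

ABSTAIN = None  # a labeling function may decline to vote

def lf_wire(text: str):
    return "fraud" if "wire transfer" in text.lower() else ABSTAIN

def lf_urgent(text: str):
    return "fraud" if "urgent" in text.lower() else ABSTAIN

def lf_greeting(text: str):
    return "ok" if text.lower().startswith("hi") else ABSTAIN

LABELING_FUNCTIONS = [lf_wire, lf_urgent, lf_greeting]

def weak_label(text: str):
    """Majority vote over non-abstaining labeling functions."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

print(weak_label("URGENT: confirm the wire transfer today"))  # -> fraud
print(weak_label("Hi team, notes attached"))                  # -> ok
```

A subject matter expert writes a handful of heuristics once, instead of hand-labeling thousands of examples, which is why this approach scales where manual annotation doesn’t.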
Sign up for our newsletters
Get the best news and perspectives from our Foundation Capital and B2BaCEO newsletters.