
Reinventing Enterprise Software

November 29, 2017
Ashu Garg

These days, people in Silicon Valley throw around the term “artificial intelligence” so wantonly and meaninglessly that they may as well be saying “abracadabra.” I want to make it clear that when I talk about A.I., I’m not talking about silicon-based pixie dust that brings machines to life and grows cash in VCs’ gardens. Artificial intelligence is a tool kit—a technology that has matured over decades. In the past decade, costs, capabilities, and talent have arrived at a point where it is finally practical to deploy this tool kit across the tech economy.

I arrived at this viewpoint after more than a decade of experience with machine learning. When I ran Microsoft’s online ads business, we used machine learning to do behavioral targeting. It was prohibitively expensive back then, but the economics of online advertising justified one of the largest commercial implementations of machine learning at the time.

Soon after coming to Foundation Capital in 2008, I joined the board of Conviva, which was at the forefront of using machine learning to optimize the video experience. The timing couldn’t have been better—Conviva was about to play an important role in shaping the future of machine learning, and I landed a courtside seat.

Shortly after I joined, Berkeley’s AMP Lab worked with Conviva to jointly develop a new cluster computing system—one that promised to dramatically speed up data analysis. It was called Spark. Given my courtside seat, Spark’s potential was obvious to me. I predicted that all companies would become essentially data companies—and so they would all benefit from this tool. When Databricks was formed to commercialize Spark, I was one of the first investors. Today, almost every machine-learning company uses Spark, and Databricks is the backbone of most machine-learning architectures.
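
To give a flavor of the kind of data analysis Spark was built to speed up, here is a minimal PySpark sketch. The dataset path, column names, and metrics are invented for illustration; this is not drawn from Conviva’s actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: aggregate video-quality metrics over a large event log,
# the kind of analysis a Conviva-style workload runs at scale.
spark = SparkSession.builder.appName("session-quality").getOrCreate()

# Placeholder path and schema, purely for illustration.
events = spark.read.parquet("s3://example-bucket/playback-events/")

summary = (
    events
    .groupBy("cdn", "device_type")          # assumed column names
    .agg(
        F.avg("rebuffer_ratio").alias("avg_rebuffer"),
        F.count("*").alias("sessions"),
    )
    .orderBy(F.desc("avg_rebuffer"))
)

summary.show(10)
spark.stop()
```

The point is less the snippet itself than that an aggregation like this, expressed once and executed across a cluster, made it practical to analyze event data at the scale these companies were seeing.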

After Conviva, I also invested in Aggregate Knowledge, which applied machine learning to the large data sets that exist in display advertising. Their insight was that the “last touch” attribution models used to measure advertising performance were outdated, and that the availability of longitudinal data, combined with advances in infrastructure and models, made it possible to implement “multi-touch attribution,” which takes into account all the ads an individual has been exposed to over a 90-day period. Like Conviva, Aggregate Knowledge had to build custom infrastructure (e.g. an event stream processor) to enable data collection and processing.
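
To make the contrast concrete, here is a minimal sketch of the two attribution models. The impression log, channel names, and the simple linear weighting are my own illustrative assumptions, not Aggregate Knowledge’s actual model.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical impression log: (user_id, channel, timestamp).
impressions = [
    ("u1", "display", datetime(2017, 9, 1)),
    ("u1", "search",  datetime(2017, 10, 15)),
    ("u1", "email",   datetime(2017, 11, 20)),
]
conversion_time = datetime(2017, 11, 25)
WINDOW = timedelta(days=90)

# Keep only impressions inside the 90-day lookback window.
in_window = [i for i in impressions if conversion_time - i[2] <= WINDOW]

# Last-touch: the final impression before conversion gets all the credit.
last_touch = defaultdict(float)
last_touch[max(in_window, key=lambda i: i[2])[1]] = 1.0

# Multi-touch (linear): every impression in the window shares credit equally.
multi_touch = defaultdict(float)
for _, channel, _ in in_window:
    multi_touch[channel] += 1.0 / len(in_window)

print("last-touch :", dict(last_touch))   # {'email': 1.0}
print("multi-touch:", dict(multi_touch))  # each channel gets roughly 0.33
```

A toy script like this hides the hard part: as noted above, doing it over longitudinal data at display-advertising scale required custom infrastructure such as an event stream processor, not a batch job on one machine.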

My experience in martech led me to realize that this concept could be applied to many other spaces. Wherever you have large data sets, coupled with lots of human processing, there’s an opportunity for algorithms to intervene. Until recently, however, the expense of computerized alternatives to a human workforce couldn’t be justified. Companies had to make do with armies of young people rendering their idiosyncratic judgments.

In the past decade, however, the costs of compute and storage have come down by more than an order of magnitude. Open-source tools (for data processing, model building, deployment, and monitoring) have improved tremendously. And data—the “new electricity,” according to Microsoft CEO Satya Nadella—is much more abundant and accessible. These improvements have made machine learning (and more recently deep learning) much more accessible, and this tool kit can now be applied across a much broader class of problems.

Given my investments in Conviva, Databricks, and Aggregate Knowledge, three years ago I developed a thesis around the opportunity to apply machine learning across the enterprise stack. Since then, I have invested in ZeroStack, which is applying machine learning to automate the management of private cloud infrastructure; Localytics, which is applying ML to automate mobile engagement; and Custora, which leverages ML to automate retention marketing, among others.

It’s still early days in the application of ML to enterprise software. The volume of data is increasing exponentially and relentlessly—over 90 percent of the data in existence was created in just the past two years! This is the mind-boggling world in which businesses find themselves operating today: trying to solve the same problems they were facing 15 years ago—but now at inhuman scale.

Which is why I expect that in the coming years, 80 percent of Foundation’s enterprise investments will be in startups that are attempting to tackle these imponderable challenges by circumventing human limitations altogether. They will start from first principles, asking themselves how, today, they would build a company to address a problem—with minimal dependence on people. Take Opas.ai, a startup I am incubating in our offices, which is applying deep learning to application performance management. Or QuanticMind, another recent investment of mine, which uses machine learning to optimize media spend.

Some entrepreneurs will create entirely new software markets (e.g. in construction). At the same time, there is an equally large opportunity to reinvent an existing category of software by applying machine learning to an existing problem, often coupling it with a new business model.

I am particularly excited about the opportunity to use machine learning and deep learning to automate functions with large numbers of white-collar workers doing relatively repetitive tasks. Four areas that I am focused on these days are DevOps, HR (especially recruiting), sales, and finance. All four share characteristics that make them ideal for such solutions. Namely, each still relies on legions of human data analysts who can be supplemented by more efficient, cost-effective algorithms. Automating this work will save billions of dollars and person-hours, and people can be redirected to more complex, nuanced challenges. Just as incandescent light bulbs replaced tallow candles, so, too, will AI replace human effort for rote, data-intensive work.