Since the start of the year, a common question has come up in our conversations with portfolio companies: “I know that I should be using AI, but how do I get started, and where should I focus my efforts?” To address this, we organized our first “Fintech x AI Portco summit” this summer. The event allowed our founders and operators to share their firsthand experiences with AI, from traditional predictive models to emerging generative ones. Our aim was to cut through the hype and focus on how AI can deliver long-term business value.
Predictive AI models have been foundational in financial services since the early 2000s. They support critical functions like risk modeling, revenue forecasting, portfolio optimization, and fraud detection. Then, last November, ChatGPT introduced the world to generative AI, a novel AI paradigm based on the transformer model architecture. Unlike predictive models, which analyze and act on tabular, structured data, generative models excel at open-ended, creative tasks that draw on unstructured inputs. Far from mutually exclusive, these two types of AI are complementary. Forward-thinking fintechs will embrace both.
Many have opined on how generative AI will benefit incumbents, given their accumulated data, established customer bases, and deep budgets. Yet startups have distinct advantages. They can offer and distribute services at a lower cost, target specialized or niche markets, and outpace competitors who are slow to adopt new technologies. In addition, while incumbents may struggle to integrate AI into legacy systems, startups can create products that are designed with AI at their core from the outset. Given the relative ease of experimenting with generative AI, we see ample white space where startups can innovate.
In this post, we’ve gathered insights from six of our summit speakers. We start with an overview of the state of AI adoption among incumbents, then move to concrete advice for founders. Topics covered include identifying high-impact AI applications, designing user-friendly interfaces for AI systems, effectively using AI in go-to-market (GTM) strategies, and navigating the unique challenges introduced by generative models. Let’s jump in!
Alexandra, the CEO of Evident, presented learnings from the company’s first-ever AI Index. This study takes an “outside-in” approach to analyze AI adoption among the 23 largest banks in North America and Europe, all of which manage over $1 trillion in assets. The goal of Evident’s Index is to benchmark AI maturity in the banking sector and gather best practices. In doing so, the report provides banks with a clear framework to build their AI strategies.
The Index evaluates banks’ AI maturity across four key pillars: talent recruitment and development, innovation (which includes R&D, patent activity, investments, acquisitions of AI-first companies, academic partnerships, and engagement with the open-source community), executive leadership, and transparency around AI use cases and outcomes. The Index draws on a range of data sources, including company disclosures, executive interviews, LinkedIn, Crunchbase, GitHub, Google Patent, Google Scholar, arXiv, and academic conference websites, among others.
The results reveal that the most advanced banks are quickly becoming AI-first businesses. The top 10 spots are primarily held by North American banks with long-standing AI strategies. J.P. Morgan leads by a wide margin, followed by the Royal Bank of Canada and Citigroup. Across the board, banks are exploring the use of generative AI in various areas including wealth management, software engineering, customer service, and product development.
Alexandra pointed out that a gap is forming between these front runners and the banks that have been slower to adopt AI strategies. Many banks are underestimating the changes required to stay competitive in an AI-driven future. The intense competition for skilled talent, along with the inertia created by legacy systems, threatens to widen this gap.
Alexandra closed by highlighting the challenges banks encounter in adopting AI, such as the “black box” nature of AI algorithms and the emergence of new risks related to data bias, privacy, and security. Nurturing technical talent, promoting collaboration across different departments (including engineering, business development, product, design, and legal), and transparently measuring the impact of AI on business outcomes will help encourage responsible adoption. So too will updating data and technology infrastructure, educating executives about AI, and proactively addressing privacy, security, and ethical concerns.
Luis, CEO of the insurtech startup Agentero, shared insights on how startups can begin exploring AI, using Agentero’s experience as a case study. The high-level takeaways from his talk: introduce AI through hands-on learning, incorporate AI into the company’s day-to-day culture, and align AI initiatives with core business objectives.
Agentero kicked off their AI journey with an internal hackathon. The first step was to assess employees’ existing AI knowledge through a survey and introduce them to fundamental AI concepts. The team was encouraged to take Andrew Ng’s “AI for Everyone” course, which provides a foundation in using basic AI tools. Armed with this knowledge, employees independently brainstormed ways AI could benefit Agentero. They then formed cross-functional teams, including members from engineering and sales, to further develop these ideas.
To sustain momentum after the hackathon, Luis focused on making AI a core part of Agentero’s culture. He set up a dedicated AI Slack channel where employees could exchange ideas and showcase their use cases for AI tools. A company-wide ChatGPT account provided a further window into effective prompts and diverse use cases for AI, while regular all-hands meetings spotlighted new AI applications. In parallel, Luis ensured that clear guardrails around data privacy were in place.
These efforts coalesced into a company-wide AI strategy for Agentero, anchored in three core principles based on effort, impact, and projected ROI:
Omri, CEO and co-founder of Lama AI, shed light on how his team is using LLMs to disrupt lending. Lama AI helps financial institutions, fintech companies, and other SMB-focused organizations to launch and scale small business lending products through its AI-powered, API-first technology. More specifically, they’re leveraging generative AI to address a key challenge in small business lending: processing vast amounts of unstructured data such as financial statements, business plans, and cash flow projections.
Traditionally, lending institutions have had to invest significant time, resources, and labor into cleaning and categorizing this data before feeding it into their underwriting models. LLMs, with their ability to directly process and act on unstructured data, allow Lama AI to bypass this costly preprocessing step. Lama AI is also able to continually train its LLMs with additional, domain-specific data to ensure data accuracy and better meet customer needs.
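The extraction pattern described above can be sketched in a few lines. This is a minimal illustration, not Lama AI's implementation: the `call_llm` function is a stub standing in for a real LLM API call, and the prompt and field names are assumptions chosen for the example. The useful part is the shape of the orchestration, which prompts for JSON and validates the reply before it reaches an underwriting model.

```python
import json

# Stub for an LLM API call; a real system would send the prompt to a
# hosted or fine-tuned model. Hard-coded here so the sketch runs standalone.
def call_llm(prompt: str) -> str:
    return json.dumps({"annual_revenue": 480000, "net_income": 52000})

# Illustrative prompt and field names, not Lama AI's actual schema.
EXTRACTION_PROMPT = (
    "Extract the following fields from the financial statement below "
    "and reply with JSON only: annual_revenue, net_income.\n\n{document}"
)

def extract_financials(document: str) -> dict:
    """Ask the model for structured fields, then validate the JSON reply."""
    raw = call_llm(EXTRACTION_PROMPT.format(document=document))
    fields = json.loads(raw)
    missing = {"annual_revenue", "net_income"} - fields.keys()
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return fields

statement = "FY2022: revenue of $480,000 against net income of $52,000."
print(extract_financials(statement))
```

The validation step matters: because LLM output is free-form text, downstream underwriting code should never consume it without checking that the expected fields actually came back.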
At Lama AI, LLMs have two main use cases:
Turning to the challenges of building products using LLMs, Omri cautioned fellow founders about the potential for the models to hallucinate, or generate inaccurate information. To counteract this and other quality risks, Lama AI’s models are fine-tuned with verified data and user input and are further validated across specialized models and agents. These methods also address bias, privacy, and security concerns.
Trey, CEO and co-founder of Tennr, introduced a practical framework to help fintechs identify the best uses of generative AI tools. He drew on the countless customer conversations that he’s had while building Tennr, which offers businesses an easy way to use custom AI models in complex workflow automations.
He began by dividing potential generative AI use cases into two main areas: internal workflows and customer-facing experiences. For internal workflows, the first consideration is whether the process is contained within a single system or spans multiple systems. Processes that operate within a single system like Salesforce can make use of that system’s built-in AI capabilities. For more complex, multi-system workflows, businesses can consider solutions like Tennr. Tennr uses LLMs’ reasoning capabilities to connect APIs, documents, and data across different tools. As one example use case, Tennr can review client documents, update Salesforce records, and send follow-up emails asking for clarification.
For customer-facing experiences, the first question to ask is if the interaction involves a chatbot. Simple chatbots can easily be integrated into products using OpenAI in combination with orchestration platforms like LangChain. For more advanced conversational experiences that require domain-specific knowledge, platforms like Lamini allow businesses to train customized LLMs.
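The core of a simple chatbot integration is just conversation-state management around a model call. Below is a hedged sketch of that loop: `call_model` is a stub that echoes the user (a real integration would send `messages` to a chat-completions API, with an orchestration layer like LangChain handling history, retrieval, and retries), and the system prompt is illustrative.

```python
# Stub model call; in production this would be a chat-completions request
# taking the full message history as context.
def call_model(messages: list[dict]) -> str:
    last = messages[-1]["content"]
    return f"You asked: {last}"  # placeholder reply

class ChatSession:
    """Keeps the running conversation so each turn has full context."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession("You are a helpful banking assistant.")
print(session.ask("What are my card's FX fees?"))
```

The system/user/assistant message structure is the common denominator across providers; swapping the stub for a real API call is the only change needed to make this a working chatbot.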
For non-chatbot customer experiences, the right approach depends on how tailored the LLMs’ capabilities need to be. Generic NLP features, such as translating natural language into SQL, are available through products like defog.ai. For more specialized needs, like document summarization, synthesis, and search, a combination of tools (like the aforementioned pairing of OpenAI and LangChain) can help. For even more complex needs, reach out to Trey!
David, co-founder and COO of Keeper Tax, focused on UI and design considerations for AI products. Keeper Tax is an innovative tax-filing software designed for U.S.-based freelancers with 1099 income. It’s built from the ground up around AI capabilities such as automatic deduction scanning, which reviews users’ credit card and bank statements to identify deductions without manual input.
David began with an overview of how AI is transforming the ways that everyday people interact with computers. He traced the evolution of UI paradigms from batch processing, where only technical users could operate computers, to command-based interactions and graphical interfaces, which ushered in the era of personal computing and the modern internet. AI-based interfaces represent the next evolution of this paradigm, where users specify their desired outcome, and the AI model anticipates their needs based on historical data and interaction patterns.
Building atop this third paradigm, Keeper Tax aims to markedly improve the tax filing process for freelancers. Remarkably, over 50% of tax returns are still processed by human accountants every year. This is because, when any complexity arises, the onus is on filers to navigate it and ensure the accuracy of their returns. Keeper Tax aims to recreate the expert-accountant experience using generative AI. Users compile their tax forms, and an intelligent LLM-powered chatbot offers real-time, personalized advice, just like an expert accountant. To ensure accuracy and safety, Keeper Tax tags data by accuracy needs and incorporates human review in high-risk scenarios.
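The tag-and-review pattern above can be made concrete with a small routing sketch. This is an illustration of the general human-in-the-loop technique, not Keeper Tax's actual system: the topic names, the risk set, and the queue are all hypothetical.

```python
# Hypothetical risk tags: topics where a wrong answer is costly go to a
# human reviewer instead of being answered automatically.
HIGH_RISK_TOPICS = {"deduction_eligibility", "audit_response"}

review_queue: list[dict] = []  # stands in for a real review workflow

def route_answer(topic: str, draft_answer: str) -> str:
    """Return the draft for low-risk topics; escalate high-risk ones."""
    if topic in HIGH_RISK_TOPICS:
        review_queue.append({"topic": topic, "draft": draft_answer})
        return "A tax expert will review this and get back to you."
    return draft_answer  # low-risk answers go straight to the user

print(route_answer("filing_deadline", "Your federal return is due April 15."))
print(route_answer("audit_response", "Draft reply to IRS notice..."))
```

The model still drafts an answer in both branches; the routing only decides whether a human signs off before the user sees it, which keeps reviewer workload proportional to risk.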
When it comes to incumbents, David noted that established players like TurboTax may be hesitant to adopt new technologies due to concerns about revenue and reputation. By contrast, startups like Keeper Tax benefit from agility and a smaller user base, which makes it easier to implement innovative features and maintain human-in-the-loop processes for complex cases. In addition, while incumbents have accumulated large amounts of proprietary data, LLMs like those used by Keeper Tax can learn efficiently through few-shot prompting.
In closing, David emphasized the importance of branding AI products. Currently, the Keeper team is considering whether to position the software as “AI taxes” and is experimenting to find the optimal approach. Determining the right level of transparency about AI involvement is crucial to building user trust.
Andrew’s talk turned to AI use cases for GTM. Andrew was the first data scientist at NextRoll and has been leading ML efforts at this marketing technology platform for over 11 years. NextRoll’s tech stack includes a variety of ML models, including generalized linear models for ad pricing, recommender systems for personalizing ads to user preferences, LLMs for tasks like text classification and information extraction, and external tools like ChatGPT.
Andrew began with an overview of NextRoll’s flagship products, highlighting the application of ML in each. BidIQ, their ad bidding engine, uncovers statistically significant patterns undetectable by humans, allowing marketers to refine ad delivery. Dynamic Creative uses ML to test variations in ad components, such as images, CTAs, and copy, to identify what engages viewers the most. Generative Emails leverages ChatGPT to craft personalized emails using CRM data. Finally, Account News feeds RSS content into an LLM to classify and extract relevant company news for sales teams.
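The Account News pattern, classify incoming articles and surface only the ones a sales team cares about, can be sketched as below. This is a stand-in illustration, not NextRoll's pipeline: the keyword classifier substitutes for an LLM classification prompt, and the category names are assumptions.

```python
# Illustrative categories a sales team might care about.
SALES_RELEVANT = {"funding", "leadership_change", "product_launch"}

def classify_article(text: str) -> str:
    """Keyword stub standing in for an LLM classification call."""
    lowered = text.lower()
    if "raised" in lowered or "series" in lowered:
        return "funding"
    if "appointed" in lowered or "new ceo" in lowered:
        return "leadership_change"
    return "other"

def filter_feed(articles: list[str]) -> list[tuple[str, str]]:
    """Tag every article, then keep only sales-relevant categories."""
    tagged = [(a, classify_article(a)) for a in articles]
    return [(a, cat) for a, cat in tagged if cat in SALES_RELEVANT]

feed = [
    "Acme raised a $30M Series B.",
    "Local weather update for Tuesday.",
]
print(filter_feed(feed))
```

Swapping the keyword stub for an LLM call is what makes this robust to phrasing the keywords would miss, which is exactly the advantage Andrew describes in using LLMs for text classification.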
Andrew then shared four guiding principles for designing effective ML products:
In closing, Andrew underscored the ongoing need to keep humans in the loop. While AI can create helpful and grammatically correct content, it should serve as a basis for sparking human creativity and generating ideas, rather than as a completely independent, end-to-end solution.
We hope that this post serves as the starting point for an ongoing conversation. If you’re working at the intersection of fintech and AI, we’d love to hear from you! Connect with us at cmoldow [at] foundationcap.com and nstainfeld [at] foundationcap.com.
Published on 10.09.2023
Written by Charles Moldow and Nico Stainfeld