Advances in Generative AI have unlocked an explosion of AI use-cases, making shiny-object syndrome a real concern for founders, who must filter through a mountain of ideas to identify the use-cases that align with their business goals and constraints.
During our Fintech x AI PortCo Summit, David Kang, Co-Founder and COO of Keeper, shared his perspective—one formed by his experience implementing AI into their product. Keeper is a modern tax filing software tailored to small business owners, specifically self-employed freelancers and independent contractors. And unlike many companies that have simply tacked AI onto their product in recent months, Keeper has been leveraging AI since its inception in 2019, using the technology to automatically detect deductions and power personalized assistance.
David shares his team’s approach to testing, validating, and growing AI usage thoughtfully. Their path shows how lean experimentation can reveal where existing solutions fall short while developing the in-house expertise needed to eventually graduate to custom models.
Fintech founders can use this framework to implement AI in a way that’s thoughtful and pressure-tested for delivering actual value to customers.
Performant LLMs like ChatGPT now remove the need for startups to invest in training custom AI. Instead, startups can likely validate their use-cases with an off-the-shelf model fine-tuned on a small, domain-specific dataset. “You may only need 100 examples, not a billion, for a model to effectively handle a domain,” David said.
Keeper successfully applied this “minimum viable” approach to kickstart their conversational tax assistant. By fine-tuning a base GPT model with domain data, tags to delineate reliability levels, and guardrails on riskier content, they created a reasonably accurate experience without reinventing the wheel.
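To make the “minimum viable” fine-tuning idea concrete, here is a small illustrative sketch of what a tiny domain dataset with reliability tags could look like, expressed in OpenAI’s chat-format JSONL used for fine-tuning. The tag names, system prompt, and Q&A pairs are hypothetical examples, not Keeper’s actual data:

```python
import json

# Hypothetical system prompt; the reliability tags below ([HIGH_CONFIDENCE],
# [NEEDS_REVIEW]) are illustrative stand-ins for the kind of labels the
# article describes, not Keeper's real taxonomy.
SYSTEM = "You are a tax assistant. Prefix every answer with a reliability tag."

examples = [
    {
        "question": "Can I deduct my home office?",
        "answer": "[HIGH_CONFIDENCE] Yes, if the space is used regularly "
                  "and exclusively for your business.",
    },
    {
        "question": "How should I structure my LLC across three states?",
        "answer": "[NEEDS_REVIEW] Multi-state structuring is complex; "
                  "a human tax professional should weigh in.",
    },
]

def to_jsonl(rows):
    """Convert Q/A pairs into fine-tuning records, one JSON object per line."""
    lines = []
    for row in rows:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": row["question"]},
                {"role": "assistant", "content": row["answer"]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl(examples))
```

With a file like this, even a few hundred curated rows can teach a base model the domain’s vocabulary and the habit of flagging its own reliability, which is the guardrail pattern the passage describes.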
As David noted, this pragmatic start allows faster iteration versus the heavy lifting of developing a proprietary model. Of course, it requires transparency on AI usage and disclaimers around accuracy. But it can unlock significant progress in months, not years.
Innovating responsibly also means testing new AI capabilities directly and openly with end users, early and often. Keeper conducts hundreds of interviews to identify pain points and validate product concepts with their target persona—freelancers and small business owners managing complex tax situations.
User feedback revealed strong demand for an “omni-form” document upload flow, mimicking the pattern of handing your entire financial paper trail to an accountant. By evaluating AI’s ability to parse uploaded PDFs and ask intelligent follow-ups, Keeper can determine where human review is still required before rolling out automated features.
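The “ask intelligent follow-ups” step can be sketched simply: after the model parses an uploaded document, compare the extracted fields against what the flow expects and turn each gap into a question for the user. The field names and questions below are hypothetical, for illustration only:

```python
# Illustrative sketch (not Keeper's actual schema): map each expected field
# of a parsed tax document to the follow-up question asked when it's missing.
EXPECTED_FIELDS = {
    "payer_name": "Who issued this 1099?",
    "tax_year": "Which tax year does this form cover?",
    "total_income": "What total income amount is shown on the form?",
}

def follow_up_questions(parsed: dict) -> list:
    """Return one question per expected field the parser failed to extract."""
    return [q for field, q in EXPECTED_FIELDS.items() if not parsed.get(field)]

# The parser found the payer but not the year or income:
parsed_doc = {"payer_name": "Acme LLC", "tax_year": None}
print(follow_up_questions(parsed_doc))
# → ['Which tax year does this form cover?',
#    'What total income amount is shown on the form?']
```

This mirrors what a human accountant does with a pile of paperwork: note what is missing, then ask targeted questions rather than guessing.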
David emphasized that today’s AI excels at focused tasks, like extracting structured data from unstructured documents. But managing an end-to-end process requires understanding AI’s limitations and intelligently looping the human back in where there is an extremely high bar for accuracy. In the case of Keeper, tax-related questions require this extra diligence.
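One common way to implement that “loop the human back in” pattern is confidence-gated routing: automated output ships only when the model’s confidence clears a threshold, and high-stakes fields get a stricter bar. The thresholds and field names here are assumptions for illustration, not Keeper’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # model-reported score in [0, 1]

# Assumed thresholds for illustration; tax-critical fields get a stricter bar.
DEFAULT_THRESHOLD = 0.90
STRICT_THRESHOLD = 0.99
HIGH_STAKES_FIELDS = {"deduction_amount", "filing_status"}

def route(extraction: Extraction) -> str:
    """Return 'auto' to ship the value, or 'human_review' to loop a person in."""
    bar = (STRICT_THRESHOLD if extraction.field in HIGH_STAKES_FIELDS
           else DEFAULT_THRESHOLD)
    return "auto" if extraction.confidence >= bar else "human_review"

print(route(Extraction("vendor_name", "Acme LLC", 0.95)))     # → auto
print(route(Extraction("deduction_amount", "$1,200", 0.95)))  # → human_review
```

The design point is that the same 95% confidence is good enough for a vendor name but not for a deduction amount, which is exactly the kind of field where the accuracy bar is extremely high.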
Proactively identifying those limitations and setting expectations up-front with the user ensures customers aren’t surprised when the technology falls short.
As LLMs rapidly advance, David expects AI applications to expand beyond internal operations (like writing code) into more customer-facing product offerings. However, he cautioned startups to stay grounded in solving real user needs instead of shiny new use cases.
Keeper’s roadmap focuses on areas where AI can enhance their users’ experience and outcomes—such as simplifying convoluted tax processes—not chasing the cutting edge. “Right now most are focused on internal use cases,” David said. “But over time, as people get more comfortable, we may change how users interact with software across industries.”
That said, David emphasized startups should carve out space for controlled experimentation as AI capabilities evolve. By taking an incremental, validated learning approach, founders can ride successive waves of innovation while mitigating risk.
A startup’s biggest advantage is its speed—and that’s true when it comes to implementing AI too. In this case, the competitive landscape might actually create a larger advantage for smaller players.
For large players, there’s a disproportionate risk of public backlash. AI-created mistakes, such as hallucinations, are far riskier for established brands because of their massive social footprint. Imagine TurboTax’s AI making a mistake while preparing your tax return. The risk for industry incumbents is often too great to release a feature before it’s fully baked, giving smaller players a window of opportunity to take these risks, test new ideas, and be first to market.
David stressed that the goal here is not to “move fast and break things” at scale, but rather to run controlled product tests at the cutting edge, and place considered product bets while industry goliaths are deliberating.
Leveraging AI doesn’t require massive teams and budgets—just thoughtful prioritization, user-centric design, and pragmatic delivery timed to the maturity of the technology. As David put it, AI is going to change how everyday people interact with computers. But right now, guardrails and guidelines are important. You can parse use-cases based on the accuracy level needed. Then over time, remove those guardrails.
Keeper’s journey shows how forward-thinking founders can tap into AI’s potential today—experimenting with user-facing features where the big players of industry won’t—while also laying the groundwork to unleash even more transformative applications down the road. These practical insights can help startups take real steps into the AI-enabled future—where the possibilities are vast and shiny, but progress comes one iteration at a time.
A special thanks to David for sharing his expertise and to my partner Steve, who led Foundation’s investment in Keeper.