
How AI agents will redefine market research

08.12.2025 | By: Leo Lu, Joanne Chen

Whether you’re building a CPG brand, a consumer app, or a B2B product, the basics haven’t changed: understand the market, talk to users, ship what works. But traditional market research (the full range of activities companies use to understand their markets, competitors, and customers, including user research) is slow, expensive, and episodic. Companies worldwide burn months of productivity and spend $140 billion annually on this work, spread across in-house researcher salaries, consultant fees, syndicated data providers, research technology platforms, and respondent access.

But thanks to AI, market research is being completely reimagined.

We believe every one of these categories is on the verge of disruption, with workflows that are about to be totally redefined. The shift has already begun, as companies have started turning to AI agents to handle repetitive, manual tasks like panel recruitment, conducting interviews, and data aggregation. But what feels revolutionary now will culminate in an even bigger paradigm shift: AI-driven, always-on research networks.

Today’s human bottleneck

The constraints of traditional market research become clear when you examine the process: a straightforward user study routinely takes 6-12 weeks and costs $25-65k just to recruit participants, run sessions, and produce executive-ready insights. This isn’t dysfunction: it’s the natural result of manual, human-dependent workflows. On top of this, enterprises juggle around eight distinct vendor categories simultaneously, from full-service agencies like Nielsen to panel providers like Cint to survey platforms like Qualtrics. A 2025 Forrester study found one composite brand was spending $2.6M on traditional agency work annually before switching to automation.

The limitations compound: meaningful user data remains expensive, fragmented, and slow to produce due to sequential human processes. Budgets get scattered across vendor patchworks, keeping projects episodic rather than continuous. Recruitment constraints mean samples stay small and homogeneous. Most critically, insights get diluted through information compression: critical nuances are lost as data hops across tools, agencies, and in-house teams.

The net effect: massive spend for minimal leverage when it’s time to actually make decisions.

The inflection point is now

1. The technology finally works – and it’s getting cheaper daily

Frontier models now reason and respond with human-like empathy, enabling AI agents to conduct interviews that feel genuine and create synthetic personas that mirror real users. Memory frameworks and longer context windows keep conversations adaptive instead of robotic. Voice models have crossed the uncanny valley; they now sound natural and respond quickly enough for real-time research.

2. There is massive underserved demand in cross-functional teams

Here’s the point many people are missing: the biggest opportunity isn’t just replacing existing research approaches – it’s also about unlocking entirely new users. Traditionally, only dedicated research teams could afford the time and cost of market studies. But when AI reduces research costs by orders of magnitude, suddenly product, marketing, CX, and sales teams can integrate continuous feedback into their daily workflows. This shift transforms research from episodic projects into fast-return, iterative loops.

3. The bar for AI adoption is low

Unlike deterministic fields where 99.9% accuracy matters, market research has always been probabilistic. If an AI tool delivers meaningful insights at 80% of current quality, but operates 10× faster and cheaper, it clears the bar easily. This makes companies more likely to implement the technology, lets startups land quickly, and creates rapid feedback loops for iteration.

The evolution of market research

As we look ahead, we’re breaking down the market research stack into three layers: data collection, workflow automation, and output execution. Already, we’re seeing real disruption in the first two.

1. The data collection layer

On the secondary research front, LLMs have made public data trivial to surface – deep-research agents in tools like ChatGPT and Perplexity can now generate tailored, high-fidelity competitive and industry syntheses in minutes instead of weeks. This piece of the data layer is becoming table stakes fast.

But the real action is in primary research. AI-powered interview agents from startups like ListenLabs, Outset, and Conveo can now run the full interview loop: recruiting, scheduling, and conducting sessions. These agents ask open-ended questions, pursue contextual follow-ups, and adapt tone across demographics and cultures. With advanced language, voice, and vision models, they pick up intent signals from phrasing, tone, and facial cues, then steer conversations in real time. 

The time and cost savings are significant: ListenLabs helped Microsoft collect global Copilot feedback in one day (a process that previously took 6-8 weeks) at roughly one-third the cost. Meanwhile, a stealth founder we spoke with is building a platform where AI agents listen, probe, and ask appropriate follow-up questions, using specialized models for each stage. If a theme keeps surfacing during interviews, the agent can draft a micro-survey or recruit a new cohort to validate it within minutes. The result is a rolling research engine that transforms one-off interviews into an always-updating stream of insights, instantly surfacing actionable patterns for teams.
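The trigger logic behind such a rolling research engine can be sketched in a few lines. This is an illustrative toy, not the stealth founder's actual system; the names (`RollingResearchEngine`, `trigger_followup`) and the mention threshold are assumptions.

```python
from collections import Counter

THRESHOLD = 3  # assumed: mentions before a theme warrants validation

class RollingResearchEngine:
    """Toy model: accumulate interview themes, queue follow-ups."""

    def __init__(self):
        self.theme_counts = Counter()
        self.followups = []  # validation actions queued for execution

    def ingest_interview(self, themes):
        """Record the themes surfaced in one interview session."""
        for theme in themes:
            self.theme_counts[theme] += 1
            # Fire exactly once, the moment a theme crosses the threshold
            if self.theme_counts[theme] == THRESHOLD:
                self.trigger_followup(theme)

    def trigger_followup(self, theme):
        # A real system might draft a micro-survey or recruit a new
        # cohort here; this sketch just queues the intended action.
        self.followups.append(f"micro-survey: validate '{theme}'")

engine = RollingResearchEngine()
engine.ingest_interview(["pricing confusion", "onboarding friction"])
engine.ingest_interview(["pricing confusion"])
engine.ingest_interview(["pricing confusion", "mobile bugs"])
print(engine.followups)  # ["micro-survey: validate 'pricing confusion'"]
```

The key property is that validation is triggered by the data itself, mid-stream, rather than waiting for a study to conclude.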

The catch? These systems still miss nuanced responses and subtle hints that experienced human researchers intuitively pursue. They’re powerful but not yet perfect.

The wild card

Another innovative approach on the data collection front is using synthetic personas that replicate real users. These AI agents are seeded with demographic data, behavioral patterns, and preference profiles from actual consumer data. They can interact with each other in simulated environments, respond to marketing stimuli, make purchasing decisions, and evolve their preferences over time, creating dynamic populations that can be observed, experimented with, and queried. 
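To make the idea concrete, here is a minimal sketch of a synthetic persona: an agent seeded with demographic and preference data that responds to a stimulus and drifts over time. This is not any vendor's actual API; all names and the scoring formula are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticPersona:
    """Toy persona seeded from (hypothetical) consumer data."""
    age: int
    region: str
    preferences: dict = field(default_factory=dict)  # category -> affinity in [0, 1]

    def respond_to_stimulus(self, category, appeal):
        """Return a purchase-intent score for a marketing stimulus."""
        affinity = self.preferences.get(category, 0.5)  # neutral default
        return min(1.0, affinity * appeal)

    def evolve(self, category, delta):
        """Shift a preference, simulating tastes changing over time."""
        current = self.preferences.get(category, 0.5)
        self.preferences[category] = max(0.0, min(1.0, current + delta))

# A small simulated population, queried like a panel
population = [
    SyntheticPersona(age=30 + i, region="US", preferences={"snacks": 0.4 + 0.1 * i})
    for i in range(3)
]
intents = [p.respond_to_stimulus("snacks", appeal=1.2) for p in population]
```

Because personas are cheap objects, thousands can be spun up, shown a stimulus, and re-queried after an `evolve` step, which is what makes "observable, experimentable populations" economically plausible.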

Synthetic users became feasible fairly recently, thanks to the latest long-context LLMs and persona-alignment toolkits that can both digest rich qualitative data and spin up thousands of low-cost agents on demand. And recent studies have found they are reasonably accurate. A 2024 study by Stanford researchers including Joon Sung Park (who appeared on our “AI in the Real World” podcast) found that long-context LLMs can ingest two-hour interviews and reproduce survey answers with 85% accuracy compared to the original human interviewees. Meanwhile, a more recent study conducted by Viewpoints AI generated nearly 20,000 synthetic respondents and replicated 76% of 133 published marketing study results in just hours. 

But the real unlock, Leo Yeykelis, the founder of Viewpoints AI, told us, is replacing a single $30-60k human study with dozens of rapid AI loops. This pushes enterprises toward a continuous experimentation mindset.

Despite the progress and the promise, current limitations are significant: LLMs tend to reflect WEIRD (Western, Educated, Industrialized, Rich, Democratic) values, which can underrepresent minority perspectives and introduce demographic biases. Synthetic users also exhibit sycophancy (the tendency to provide overly positive feedback) which can miss genuine negative reactions. And since they’re trained on existing data, these personas struggle with truly novel products and generational shifts in behavior, limiting their ability to surface breakthrough insights.

Bottom line: Synthetic personas work well as supplementary data sources but can’t completely replace human insights – yet.

2. The workflow layer

This layer acts as the coordination engine that connects data collection to decision-ready outputs. Historically, this involved tedious, manual steps to move from research inputs to executive-ready findings. But today, AI is dramatically streamlining these workflows and has transformed market and user research from an episodic, project-based approach into a continuous, always-on research network.

Market research agents scan the open web (news, filings, reviews, forums), normalize signals, and surface what changed. User interview agents run parallel studies across human panels and synthetic cohorts to surface patterns, pressure-test messaging, and size impact. Autonomous analytics agents monitor product and support behavior, detect emerging trends, and flag improvement opportunities in real time. These systems can even tap into previously underutilized proprietary data (like CRM systems, support tickets, and product session replays), creating holistic understanding of the market and user with an unprecedented depth of context.

Tomorrow, research agents will become the shared insights backbone for an org-wide system of agents. Any function’s agents, from product management to marketing, can query or subscribe to their findings, then recommend (or within guardrails, execute) moves so decisions ride on live signals, not quarterly studies.
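The subscription model described above can be sketched as a simple in-process publish/subscribe hub. This is an assumed architecture, not a description of any existing product; `InsightsBackbone` and its methods are hypothetical names.

```python
from collections import defaultdict

class InsightsBackbone:
    """Toy pub/sub hub: research agents publish findings,
    domain agents (PM, marketing, CX) subscribe and react."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, finding):
        # Push each finding to every agent subscribed to this topic
        for callback in self.subscribers[topic]:
            callback(finding)

backbone = InsightsBackbone()
actions = []

# A PM agent subscribes to pricing findings and queues a response
backbone.subscribe("pricing", lambda f: actions.append(f"PM agent reviews: {f}"))

# A research agent publishes a live signal
backbone.publish("pricing", "users conflate tiers A and B")
```

In practice the callbacks would themselves be agents operating within guardrails, but the decisive shift is the same: decisions ride on pushed live signals rather than pulled quarterly reports.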

3. The output layer

Currently, outputs, whether produced by human researchers or AI research agents, are mostly static: slide decks, write-ups, and, at best, dashboards that restate findings. That leaves a gap between insight and decision; it still requires humans to interpret the results and make the call – reprioritize a feature, launch a campaign, or adjust an operating plan.

We expect this to change as tighter bridges form between research agents and other domain expert agents (such as PM, marketing, or strategy). The low-hanging fruit is that these domain agents can leverage the insights to generate execution plans that are materially better – more precise, better scoped, and aligned to live user and market signals.

Beyond that, as the domain agents’ capabilities deepen and teams grant them more autonomy over end-to-end workflows, always-on research agents become a crucial part of the broader system of agents, serving as its feedback engine. Whenever a product, campaign, or operating plan is proposed or tweaked, the research agents can collect responses from real and synthetic users in near real time and feed them back. This transforms market research from an episodic process into one that is ongoing, iterative, and guided by continuous user feedback.

In effect, the gap between “what we learned” and “what we do” closes: research drives execution, execution triggers new research, and the cycle keeps pace with the market and user behavior.

Call for startups

At Foundation, we’re actively seeking founders who see beyond incremental improvements to fundamentally reimagine how companies understand their markets and users. If you’re building the future of market intelligence, we’d love to learn more about your vision. Reach out at jchen@foundationcap.com and llu@foundationcap.com.
