What a panel with three AI-only competitors taught me about AI research strategy


By Adam Mertz, Chief Strategy Officer, Discuss

At Succeet in Frankfurt earlier this month, I sat on a panel with three companies that, from the outside, might look a lot like us. Conveo, Bolt Insight, and Tellet are all operating in the AI-moderated research space. The moderator even opened by noting — with good-natured directness — that we all had suspiciously similar marketing language on our websites.

He wasn’t wrong. But by the time the session ended, I think it was pretty clear that we are not building toward the same thing.

The panel was titled “Qual-at-scale — How effective is it really?” and for 45 minutes we covered a lot of useful ground: where AI moderation earns its place, where it doesn’t, how to think about fraud, recruitment, engagement rates. All worth discussing. 

But the most revealing moment came at the very end, when the moderator asked each of us a simple question: how do you plan to differentiate from the others?

I’ll get to my answer in a moment. First, I want to share something one of my co-panelists said.

One of the AI-only founders said the quiet part out loud

Hendrik Van Hove, co-founder of Conveo, gave an honest answer to the differentiation question. He said, in effect, that AI moderation is commoditizing — that the marginal differences between platforms will shrink, and what will ultimately matter is the data and the decisions you can make with it.

He said it more diplomatically than that, but "commodity" was his word. And I give him credit for saying it, because it's true — and it's not a small thing to admit when you've built a company entirely around AI moderation.

What it signals is that AI-only entrants are already anticipating a pivot. The moat they’re building today — fast, scalable AI interviews — is eroding before they’ve fully established it. And the direction they’re pivoting toward? Making accumulated research data more useful over time. That’s a real strategic concession, because it acknowledges something the AI market research industry has been reluctant to say directly: the problem was never just speed of data collection. It was always that research dies when the project closes.

The project mentality is the actual problem

When researchers talk about scaling qual, the conversation almost always gravitates toward methodology — how do you get more interviews, faster, cheaper? But that framing misses the deeper issue.

Most organizations are stuck in a loop. A stakeholder asks a question. Someone commissions a study. Weeks pass. A report lands. The project closes. Six months later, a different stakeholder asks a related question, and the whole cycle starts again — because no one can find the previous answer, or it’s buried in a deck that doesn’t map to the new question, or the data isn’t queryable in any useful way. The organization keeps re-buying answers it already owns.

Plain and simple, this is an architecture problem. And it's the thing that Conveo's co-founder was circling around when he said the future is about data — even if he didn't frame it quite that way.

Research shouldn’t go dark when a project ends. Every interview, every session, every survey response should feed something that gets smarter. That belief is what separates an end-to-end solution vision from a tool built for a specific method (AI-led interviews), and it’s what I kept coming back to throughout the panel.

Scaling qual isn’t synonymous with AI moderation

This was the point I kept returning to, because the framing of almost every question assumed they were the same thing.

They are not.

When you think about scaling qualitative research, you’re thinking about the entire research life cycle — how you prepare a study, how you execute it, how you synthesize what you find, and how you make those findings useful to everyone who needs them. AI moderation is one powerful tool inside that life cycle. It’s not the life cycle itself.

Take focus groups. Nobody is going to run a focus group with an AI moderator — the whole dynamic of group interaction requires a skilled human in the room (or on screen). But can you use AI to build a tighter discussion guide? To give observers simultaneous translation in real time so a team in London can follow a session running in Japanese? To synthesize and theme the findings afterward, so they feed something forward rather than sitting in a folder? Absolutely. You can scale qual in almost any use case, across almost any methodology, when you’re thinking about AI as a capability across the entire arc — not just as the interviewer.

That’s a fundamentally different product vision than “we run AI interviews.” Curiosity shouldn’t have a queue. And for most research teams right now, it does — because their tools are project-shaped rather than intelligence-shaped.

The fraud question and why human-centered AI is the answer

One of the more interesting moments came from the audience. Someone asked what happens when respondents know they’re talking to an AI — specifically, what stops them from using tools like ChatGPT to generate their answers.

It’s a fair question, and it’s only becoming more relevant.

We’re already seeing signs of it. Responses that feel slightly too polished, pauses that don’t quite match natural conversation, engagement that looks present but isn’t fully there.

This is one of the reasons many of our customers are running mixed-method studies by design — combining AI interviews with a smaller set of human-moderated sessions to cross-check quality and catch divergence. It’s not just a workaround. It’s becoming standard practice because the value of human-led interviews isn’t purely about depth. It’s also about verification. 

When you’re talking to someone face to face, the quality of that signal is different. The accountability is real. That’s human-centered AI in practice: technology that extends what human researchers can do rather than quietly cutting them out of the equation.

The other panelists had thoughtful approaches to fraud detection: video and response-tempo analysis, dual-agent observation systems, quality scoring. All worth knowing about. But the broader point stands: fraud in AI-moderated research is an emerging structural problem, not an edge case. Any AI research strategy that doesn't account for it — and that doesn't preserve human-to-human interaction as part of the mix — is incomplete.

Why I asked for a show of hands

When it was my turn to answer the differentiation question, I didn’t lead with a product feature or a roadmap item. I asked the audience a question instead.

“Raise your hand if 100% of your qualitative research is going to be done via AI interviews this year or next.”

One hand went up. Out of roughly a hundred people.

That was my point. The premise that AI moderation is the future of qualitative research — full stop — is one that almost nobody in that room actually believed. Most researchers run mixed-method programs. Most always will. Focus groups, IDIs, online communities, AI interviews, quant — these aren't competing formats. They're different tools for different questions, and a platform that only handles one of them is asking researchers to stitch together the rest on their own.

What Discuss is building is a platform with the breadth to support all of it: quant and qual, AI-moderated and human-moderated, whatever the project demands. And then — the part that I think gets undersold — to take all of that research data and turn it into a living, compounding asset rather than a collection of reports that retire the moment they’re delivered. Not a better archive. An intelligence layer. One where every study makes the next one smarter, and where the organization’s understanding of its customers grows rather than resets.

The Forrester Wave Q1 2026 Experience Research Platforms report reflects exactly this direction. What matters to Discuss about being named a Leader in that evaluation is less the badge itself than the criteria Forrester applied, which map onto where the industry is actually heading: coverage across the full research life cycle, AI that goes beyond moderation, and the ability to compound accumulated research into something an entire organization can access and act on.

What the HelloFresh story has to do with this

I spoke at another session at Succeet alongside Jo Lindenberg from HelloFresh, where she walked through exactly what this looks like in practice. Her team went from running occasional in-home ethnography with a small crew to running weekly AI-moderated sessions across ten countries with 18 respondents each. But the more meaningful shift wasn’t the cadence. It was what they’re doing with everything they’re accumulating.

Every study feeds the next one. The research doesn’t retire. The organization now has a continuously learning intelligence layer rather than a growing archive of decks that nobody can find when they need them. Brand managers can interact with virtual personas built from years of real consumer conversations rather than commissioning a new study every time a question arises.

That session is the subject of a companion blog to be published soon. I encourage you to read it alongside this one, because it puts the strategic argument from the panel into concrete operational terms. The shift HelloFresh made — from research as a project to research as a memory — is a working example of what an AI research strategy looks like when it’s built to compound rather than repeat.

What I left Frankfurt thinking about

There’s a version of AI research strategy that’s mostly about speed and cost. Run more interviews, faster, cheaper. That version is real, and it has genuine value. But it’s also exactly the version that gets commoditized — quickly and predictably, as one of my co-panelists acknowledged on stage.

The version that compounds, that gets more valuable over time, that changes how an entire organization relates to its customers — that requires something more. It requires a platform designed for the full research life cycle, not just one part of it. It requires the ability to blend methods to match the question. And it requires a commitment to the human side of insight: not as a philosophical stance, but as a practical recognition that some questions can only be answered in a conversation where both sides know a real person is listening.

Research that sleeps in folders isn’t research anymore. It’s overhead. The goal is a system where curiosity doesn’t have a waiting list — where any question gets an answer rooted in real human understanding, and where the knowledge your organization has built up over years of talking to customers doesn’t disappear at the end of a billing cycle.

That’s the bet we’re making. The Succeet panel reinforced for me that it’s the right one — and that the window for making it is narrower than it might look from the outside.

Download a complimentary copy of the Forrester Wave™: Experience Research Platforms, Q1 2026 to see how Discuss was evaluated. Want to see how this AI research strategy translates to your own research program? Talk to our team.

