The Four Lies We Tell Ourselves About AI Interviews (And What Actually Works)


Look, AI-moderated interviews are having a moment. Everyone’s talking about them. Some people are overselling them. Others are writing them off entirely.

And in the middle of all that noise? A lot of researchers are telling themselves stories that just aren’t true.

Let’s clear the air. Here are the four biggest lies we’ve heard about AI interviews and what you should actually be doing instead.

Lie #1: “I Have to Choose Between AI or Human Moderators”

No, you don’t. And thinking this way is costing you time, money, and better insights.

Here’s what’s actually true: the best researchers use both.

One of our agency customers had 40 product concepts to test. Forty. If they’d run human-led IDIs on all of them, they’d still be scheduling interviews when their stakeholders needed answers yesterday. Instead, they used AI to quickly narrow the field to the top five concepts, then brought in human moderators to go deep on the finalists.

AI for volume. Humans for nuance. Done in a fraction of the time.

You can also reverse it: run a handful of exploratory human sessions first to figure out what questions actually matter, then validate those findings at scale with AI across hundreds of respondents.

The truth: AI gives you superpowers to do your job better if you let it.

Three More Things AI Can Do That You’re Probably Not Using It For

  1. Democratize research across your org. When brand and product teams are moving fast and need research faster than an internal insights team can deliver, the right platform and guidance empower non-researchers to run their own AI-led studies. That way, your insights team stays focused on strategic, high-impact work. Everyone wins.
  2. Run research while you sleep. Literally. AI doesn’t care about time zones or PTO. You can have interviews running 24/7 across the globe.
  3. Go global without hiring a UN translator. Want to talk to consumers in Brazil, Japan, and Germany? You don’t need moderators who speak Portuguese, Japanese, and German. The AI handles it.

Stop limiting yourself to “either/or.” Start thinking “yes, and.”

Lie #2: “The AI Will Just Figure Out What I’m Looking For”

Would you hire a moderator, hand them a one-sentence brief, and say “just wing it”?

No. Because that’s a disaster waiting to happen.

So why would you phone it in with an AI moderator?

The truth: The Activity Goal is where you brief your AI. And if you phone it in, you’ll get shallow results.

Here’s what a good brief includes:

  • The research type and audience. Not “consumers.” Try “budget-conscious millennials who’ve tried and abandoned at least two finance apps in the past year.” Specificity matters.
  • Your actual objectives. What are you trying to learn? What’s the KPI you’re trying to move? Don’t make the AI guess.
  • What you’ll do with the insights. Are you shaping a positioning strategy? Prioritizing features? Refining messaging? When the AI knows what decisions are on the line, it can steer the conversation toward what’s useful instead of what’s just interesting.
  • Background context. Give the AI the same pre-read you’d give a human moderator. What does it need to know about your brand, product, or industry to ask smart follow-ups?

Think of it this way: garbage in, garbage out. A lazy brief gets you lazy insights. A thoughtful brief? That’s where the magic starts.

Lie #3: “Writing Questions for AI Is the Same as Writing Them for Humans”

Almost. But not quite.

Here’s the thing: a skilled human moderator can recover from a poorly worded question. They can read the room, adjust on the fly, clarify what you meant.

AI? Not so much. It’s going to ask exactly what you wrote and probe based on exactly what you told it to do.

So your questions and follow-ups need to be airtight.

The Three Rules

1. One question. One idea.

Don’t ask: “What are your thoughts on the pricing and the usability of the app?”

You just asked two completely different questions and mashed them into one sentence. The AI will get confused. The respondent will get confused. You’ll get a muddled answer.

Split them:

  • “What do you think about the app’s pricing?”
  • “How would you describe the app’s usability?”

2. Keep your setups short and clear.

A little context is great. A paragraph of preamble? Disaster.

Good: “Think back to your last shopping trip. What stood out to you about the store entrance?”

Bad: “Tell me what you think about the app, and whether you’d keep using it if the features improve, which they might in the next version, depending on feedback.”

Nobody knows what you just asked. Not the respondent. Not the AI. Not even you, probably.

3. Ask truly open-ended questions.

Start with why, what, how, or tell me about. These invite stories, not yes/no answers.

Go exploratory: “Tell me about the last time you tried to stick to a budget.”

Go comparative: “How is this different from other tools you’ve used?”

The truth: AI doesn’t let you get away with lazy question writing. And honestly? That’s a good thing. It’ll make you a better researcher.

Lie #4: “If I Just Ask the Question, the AI Will Know How to Probe”

Here’s where most people leave insights on the table. The AI can probe. But it doesn’t know how you want it to probe unless you tell it.

The truth: Probing instructions are where you turn a surface-level answer into something you can actually use.

How to Do It Right

Tell the AI what you’re hoping to learn.

Don’t just ask the question. Explain the why behind it.

Example:

  • Question: “If you were telling a friend why you use your budgeting app, what would you say?”
  • Probing Instruction: “Capture how participants naturally talk about value. Listen for both emotional and practical language.”

Now the AI isn’t just collecting an answer. It’s listening for something specific.

Give it concrete follow-up directions.

Be explicit:

  • “If the answer is vague, ask for a concrete example.”
  • “Probe on how they chose that option and whether they tried others first.”
  • “Ask them to walk through it step by step.”
  • “Dig into how it made them feel.”

Example:

  • Question: “What tools do you use to manage your budget?”
  • Probing Instruction: “Get the app names and their role in the daily routine. Probe on why they picked that one and if they’ve tried alternatives.”

See the difference? You’re not hoping the AI asks good follow-ups. You’re telling it exactly what good follow-ups look like.

And Don’t Forget: Match Your Probing Depth

Here’s a feature most people don’t know exists: probing depth control.

If your platform gives you the ability to set how many follow-up exchanges the AI will have per question (Discuss’ Interview Agent does), don’t ignore it. It’s one of the most underutilized settings, and it makes a huge difference.

Here’s how to think about it:

  • 1–3 for quick, straightforward questions
  • 4–6 for deeper exploration
  • 6+ for complex, multi-layered probing

If you write detailed probing instructions but set the depth to 2, the AI won’t have room to get through everything you asked it to do. Give it the space it needs.

Not all AI interview tools let you control this, but if yours does, use it. It’s the difference between scratching the surface and actually getting to the good stuff.

The Real Truth About AI Interviews

They’re not magic. They’re not going to replace you. And they’re not going to automatically deliver brilliant insights just because they’re shiny and new.

But if you design them well – if you brief them like you’d brief a human, write tight questions, and tell them how to probe – they’ll give you speed, scale, and reach you’ve never had before.

The teams doing this well aren’t just “trying AI.” They’re rethinking what’s possible when you combine the best of human intuition with the efficiency of AI.

You’ve already got the research instincts. These four truths just help you apply them to a tool that’s ready to work as hard as you do.

Ready to put these into practice? See how Discuss’ Interview Agent works or reach out for a demo. 

