What happens when AI runs your customer interviews?

An unexpected experiment with voice AI that changed how we capture customer stories — and a case study from inside a regulated giant.

If you're new here: I'm Leslie Barry, founder of Exponentially. We help enterprises build Innovation Engines that turn ideas into evidence and results. This newsletter is where 3k+ leaders and innovators get practical ideas on Pretotyping and rapid experimentation. Subscribe here.

The AI Interviewer Experiment

Last month, I tried something I wasn't sure would work: I let ChatGPT's voice mode conduct a customer case study interview.

I've done hundreds of these interviews over the years, and I know the patterns: you ask a question, the interviewee starts answering, you're already thinking about your next question, and suddenly you've missed the most important thing they just said. Then you're scrambling to redirect the conversation back to what mattered.

But when I handed the facilitation to ChatGPT, something shifted.

The AI asked one question at a time. It held context perfectly. When someone gave a surface-level answer, I could prompt the AI mid-conversation: "Ask for a concrete example" or "That's interesting, dig deeper there." The AI adapted instantly, without the awkward conversational resets that happen when humans try to course-correct.

The result? A coherent, quote-ready narrative in one sitting. Less "wait, where were we?" and more focused storytelling from the person being interviewed.

Why this matters

Better interviews mean faster case studies, cleaner proof points for executives, and more useful enablement content. The quality of your customer stories directly impacts how quickly you can share proof of value across your organization.

How to try it yourself

  1. Brief the AI on your goal and any guardrails
  2. Ask it to keep single-question pacing (one question, wait for answer, then next)
  3. Stay in the conversation to prompt follow-ups: "ask for a concrete example" or "summarize what they've said and move on"
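The steps above can be sketched as a small prompt builder. ChatGPT's voice mode can't be scripted directly, so this is a text-API-style equivalent; the function names, prompt wording, and guardrails are illustrative assumptions, not the exact prompts from the experiment.

```python
# Sketch of briefing an AI interviewer: goal, guardrails, and
# single-question pacing. Wording here is an assumption; adapt it.

def facilitator_brief(goal: str, guardrails: list[str]) -> str:
    """Build the system prompt that briefs the AI interviewer."""
    rules = "\n".join(f"- {g}" for g in guardrails)
    return (
        "You are facilitating a customer case-study interview.\n"
        f"Goal: {goal}\n"
        f"Guardrails:\n{rules}\n"
        "Pacing: ask exactly ONE question, then stop and wait for "
        "the answer before asking the next question."
    )

def steer(instruction: str) -> str:
    """Mid-conversation nudge from the human facilitator."""
    return f"[Facilitator note, not the interviewee] {instruction}"

brief = facilitator_brief(
    goal="Capture a quote-ready story about adopting the product",
    guardrails=["No leading questions", "Do not mention pricing"],
)
print(brief)
print(steer("Ask for a concrete example."))
```

The point of the `steer` helper is the third step: you stay in the loop, injecting short redirects without resetting the conversation.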

For the full breakdown of how this experiment worked and the specific prompts I used, check out the write-up here: Case Study — AI‑facilitated Interview (ChatGPT Voice)

Bottom line: Let AI handle the facilitation mechanics. You bring the substance and strategic direction.

Inside a Regulated Enterprise: How One Team Built an Innovation Engine

I can't share the company name, but I can share the playbook.

Inside a multi-billion-dollar, highly regulated organization, a small innovation team faced the classic challenge: how do you move fast when every decision needs approval, when compliance is non-negotiable, and when "just try it" isn't an option?

They didn't ask for permission to move faster. Instead, they partnered with Legal and Risk from day one. They embedded Pretotyping as the method for turning opinion into evidence. And they built safe-to-fail guardrails that could be reused across multiple experiments.

The transformation was remarkable.

What used to be "opinion shipping" — building features based on what stakeholders thought customers wanted — became an evidence-led Innovation Engine. Those retail touchpoint screens that everyone assumed were just "noise"? They became measurable conversion channels. Time from idea to first experiment? Down to about 8 days.

In roughly 12 months, they ran over 130 live experiments.

But the real win wasn't the volume. It was the shift in decision quality. Leaders started trusting the process because experiments consistently answered "Should we build this?" before anyone wasted time on "How do we build this?"

What made it work

Three things stood out:

1. They partnered with Legal and Risk early — not as gatekeepers, but as experiment design partners. Getting permission frameworks upfront meant velocity later.

2. They ran concurrent experiments — patterns across multiple tests improved their selection quality. What worked in one channel informed tests in three others.

3. They prioritized speed over perfection — imperfect experiments that shipped in 8 days beat perfect plans that took months.

Want the full playbook, including specific guardrails and proof points? I've documented the approach here: Anonymised Enterprise Case Study

The takeaway: Velocity without selection quality is just fast failure. Pretotyping gives you both.

Spotted in the wild

Three things from this month that you can apply immediately:

1. Time to first experiment beats time to perfect experiment

Stop debating the perfect test design. Ship something in 8 days. Learn from real data. Adjust. Repeat.

2. Partner with Legal and Risk before you need them

The teams that move fastest in regulated environments aren't the ones who avoid compliance — they're the ones who build reusable approval frameworks early.

3. Concurrent experiments compound learning

One test gives you one data point. Three tests running in parallel reveal patterns. Those patterns improve your ability to choose what to build next.

Building experiment velocity in your company? I work with leadership teams to embed rapid experimentation as a core capability, so you can invest in the right products with confidence. Reply if you want to explore what this looks like for your business.

One thing

Running a retrospective this month? Try this: copy your sticky notes into Claude or ChatGPT and prompt: "Group and label these into themes. Output a concise summary with next-step recommendations."
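If your sticky notes already live in a spreadsheet or export, packaging them into that prompt can be automated. A minimal sketch, assuming you paste the result into Claude or ChatGPT yourself (the prompt wording mirrors the one above; the sample notes are hypothetical):

```python
# Turn a list of retro sticky notes into a single clustering prompt.
# Paste the printed output into Claude or ChatGPT.

def retro_prompt(notes: list[str]) -> str:
    """Number the notes and prepend the clustering instruction."""
    numbered = "\n".join(f"{i}. {n}" for i, n in enumerate(notes, 1))
    return (
        "Group and label these sticky notes into themes. "
        "Output a concise summary with next-step recommendations.\n\n"
        f"{numbered}"
    )

notes = [
    "Deploys took too long this sprint",
    "Pairing on the API work went well",
    "Standups ran over 20 minutes",
]
print(retro_prompt(notes))
```

Numbering the notes makes it easy to ask follow-ups like "expand on theme 2" without re-pasting anything.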

I've been using this after team sessions, and it's a 10x lift in clarity and speed. What used to take 20 minutes of manual clustering now happens in seconds, and the AI often spots patterns I missed.

Tool of the Month

Notion AI

I'll be honest: I'm late to the Notion party. While everyone else was already building their second brain here, I was still jumping between tools. But now that I've finally moved my workspace over, Notion AI has become the tool I didn't know I was missing.

What makes it different from ChatGPT or Claude sitting in another tab? It has context across everything in my workspace. Case studies, client notes, past experiments, newsletter drafts—Notion AI can pull from all of it without me having to copy-paste or explain background. I can ask it to summarize patterns across customer conversations, draft content that references specific past work, or help me find that one insight I know I captured somewhere but can't quite remember where.

It's not just faster. It's smarter because it knows what I know. That integration across all your knowledge is what makes it exceptional.

Get Started with Your Innovation Engine

I run complimentary executive briefings and team sessions online, so you can join in wherever your team is based. It's a quick 30-minute session where I introduce Pretotyping, share real case studies, and help you identify which 2-3 experiments to run first with clear pass/fail criteria.

👉 Email leslie@exponentially.com if you'd like to set one up.

More blogs

September 2025: Is Experimentation Dead in the Age of AI? AI accelerates the build, but only experimentation tells you what customers actually want.

July 2025: 12 experiments in 3 weeks. Here's what happens when teams prioritise velocity over perfection.

AI + Experimentation Resources

Experimentation Tools Directory: 10 Essential Tools for Innovation Teams. From AI assistants and design platforms to market research and analytics tools, it covers the complete experimentation lifecycle from ideation to review. Tools include ChatGPT, Figma, Framer, Synthetic Users, Rapidly, and Heatseeker.

Set your organisation up to test ideas before you invest. And grow, exponentially.