Customer Interviews as a Continuous Practice, Not a Sprint
Growth Systems
Most companies treat customer interviews as a quarterly discovery exercise. The teams that build products people love run them every week with a system that surfaces insight without losing nuance.
By Arjun Raghavan, Security & Systems Lead, BIPI · August 13, 2024 · 6 min read
A B2B SaaS founder we work with does five customer interviews a week. Every week. For four years. He's done over a thousand. When his team debates a feature, he can quote a specific customer conversation from three months ago that settled the question. His competitors do quarterly research sprints and wonder why their roadmap feels disconnected from reality.
Why one-off discovery sprints fail
Quarterly research has two problems. First, it concentrates insight at one moment in time, then asks the team to act on it for the next 90 days while the market shifts. Second, it treats research as a department's responsibility, which removes the founder and product leads from direct contact with users. Insight gets filtered through a research deck and loses everything that matters.
The teams we admire treat customer conversations the way investors treat reading. It's a daily input, not an event.
Cadence that actually compounds
- Founder or PM does 3-5 customer calls per week, every week
- Calls are 30 minutes, mix of new prospects, active customers, and churned users
- Recordings transcribed and tagged within 48 hours
- Biweekly themes review with the product team to surface patterns
- Quarterly meta-review to spot drift in customer language
The cadence matters more than the volume. Fifty interviews compressed into a sprint and then nothing for six months is worse than five per week sustained for a year.
Who runs them and why it's not 'research'
We've seen companies hire excellent user researchers and then watched their PMs lose touch with users entirely. The job of a researcher in a continuous practice is to enable, not replace. They build the question library, train new interviewers, run the synthesis tooling, and handle complex segmentation studies. The people who own product decisions still need to do interviews themselves.
Outsourcing customer conversations to a research function is the same mistake as outsourcing engineering decisions to a contracting firm. You lose the texture that makes the decisions actually good.
Surfacing insight without losing nuance
The hardest part of operationalizing interviews is not collecting them, it's making them useful to a team that wasn't on the call. We use a layered system: full transcripts and recordings stay accessible, two-paragraph human-written summaries get tagged by theme, and a monthly themes document calls out emerging patterns with direct quotes.
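As a rough sketch of how that layered system could be wired up in practice, here is a minimal in-memory model: each interview keeps its recording link and tags, and a roll-up function counts themes for a given month with direct quotes attached. All names and fields here are illustrative assumptions, not a description of any specific tool.

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical record for one interview: the full recording stays
# linked, and the two-paragraph human-written summary is tagged by theme.
@dataclass
class Interview:
    customer: str
    date: str                  # ISO date of the call, e.g. "2024-07-02"
    recording_url: str         # full recording/transcript stays accessible
    summary: str               # two-paragraph human-written summary
    tags: list = field(default_factory=list)    # theme tags
    quotes: list = field(default_factory=list)  # direct quotes worth surfacing

def monthly_themes(interviews, month):
    """Roll up tagged interviews into theme counts for one month
    ("YYYY-MM"), keeping direct quotes attached to each theme."""
    counts = Counter()
    quotes = {}
    for iv in interviews:
        if not iv.date.startswith(month):
            continue
        for tag in iv.tags:
            counts[tag] += 1
            quotes.setdefault(tag, []).extend(iv.quotes)
    return counts, quotes
```

`counts.most_common()` then gives a ranked list of emerging patterns for the monthly themes document, with the quotes dictionary supplying the verbatim customer language to cite alongside each one.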
The questions that earn their keep
Most interview templates ask too many questions and get shallow answers. We use 6-8 questions max per call, anchored on jobs-to-be-done and switch moments. The goal is to make the customer talk for 25 minutes and the interviewer talk for 5.
- Tell me about the last time you ran into the problem we solve
- What were you using before? Why did you stop?
- Walk me through how you evaluated alternatives
- What almost stopped you from buying us?
- If we shut down tomorrow, what would you do?
- What's the most painful part of using us today?
These questions surface real switching behavior, real friction, and real dependencies. They're more useful than 50 NPS-style multiple-choice surveys put together.
What good looks like in 12 months
After a year of weekly interviews, the team should be able to answer roadmap debates by saying 'I talked to three customers last month who all hit this exact problem' instead of 'I think users probably want this'. That shift, from speculation to evidence, is the entire point. Everything else is cargo cult.
Read more field notes, explore our services, or get in touch at info@bipi.in.