User Interviews Are Not for Features
The Common Mistake
Most user interviews are structured around a question: what do you want us to build?
You ask what's frustrating, what's missing, what they'd pay for. You leave with a list. You build the list.
This is almost always a mistake.
Not because users lie — they don't, mostly. But because users can't accurately describe what they need. They describe symptoms. They describe what they think they want, when the underlying need often has a better solution they haven't imagined. The more directly you ask about features, the less useful the information.
What I Actually Do
At SAME, I'd email every user who'd been consistent — not a representative sample, not churned users, just the ones who kept coming back. I invited them to talk. Not to "provide feedback." Just to talk.
In those conversations, I wasn't trying to collect feature requests. I was trying to understand the person.
What do they care about in their life? What does "getting something done" feel like to them? What actually counts as understanding something, versus just having covered it? Are they trying to genuinely learn, or are they trying to survive the exam?
The goal was to build an internal model of who I was designing for — specific enough that I could close my eyes and imagine them at 11pm before a deadline, or halfway through something that wasn't clicking. That model is what drives good feature decisions. Not the list.
Who You Should Talk To
Two cohorts, in order of value:
Heavy, consistent users. They've had enough experience to develop opinions that matter. They also want to help — they've invested enough to care whether the product gets better. Both things make the conversation easy.
New users who formed habits fast. Someone who signed up two weeks ago and is already deeply integrated tells you something about the product's natural pull. What drew them in? What did they figure out on their own? Where did they get stuck?
Cold users, churned users, prospective users — all less useful, at least for the specific question of what to build next.
Session Replays Over Interviews, for Functional Questions
If I want to know how people use a feature — whether they find it, whether it makes sense — I look at session replays. Prompt history. The actual record of behavior.
Asking someone to describe how they use something is like asking them to narrate walking. They'll give you a cleaned-up version that skips the hesitation, the backtracking, the accidental clicks. The replay shows you all of that.
Functional validation belongs in the data. The interview is for the stuff the data can't tell you — why they care at all, what they're trying to accomplish in a larger sense, what "good" actually looks like to them.
When to Do It
My view on timing has changed.
For a long time I thought user interviews should happen before building — gather signal, then build. The problem: before you've built anything, you're asking users to evaluate an abstraction. Their mental model of your idea and what you'll actually ship are too far apart for the feedback to be reliable.
Now I think: build first, at least something minimal. Then bring it to the people you know best — your consistent users — and watch what happens. The speed of building today makes this practical in a way it wasn't two years ago. You can ship a rough version in a week. That's less time than a proper research cycle used to take.
Pre-ship interviews still have value, but for a different question: not "is this feature right?" but "help me understand this person's world well enough that I'll know when I'm on the right track."
What This Is Not
This isn't "don't talk to users." Talk to them constantly. Build real relationships with the people who use what you make.
It's that the purpose of talking to users isn't to collect feature requirements. It's to develop the internal model that makes your feature judgment good. The features should come from you — from understanding users so well that you can think on their behalf. Not from transcribing what they said they wanted.
The best product decisions I've made weren't ones where a user asked for something. They were ones where I knew a user well enough to know they needed something before they said it.
Devil's Advocate
"But users often DO know what they want." Sometimes. The trick is telling the difference between a user clearly describing a real need and one rationalizing a symptom. The only way to tell is understanding the underlying context — which brings you back to the model-building approach anyway.
"This takes a lot of time per user." Yes. That's the point. Ten deep conversations beat fifty shallow ones. The value compounds — each conversation adds to the model, and the model gets more accurate over time.
"Session replays aren't always available." True, especially early. But the principle holds: behavioral data tells you what happened; interviews tell you why people showed up at all.
Related
- The PM Role Is Dead, Product Thinking Lives — on why product judgment matters more when building is cheap
- Skills vs Insights
- Compressing Content Feedback Loops