How to vet ideas with customers before betting?

We are very excited to start down the Shape Up path of product development. I read the book and so much of it resonated with me.

However, right now our small product team does user interviews with Figma mock-ups. They're not pixel-perfect, but they are designed, and the customer can click around while we watch, ask questions, etc. We do this with five people for any big new idea, then iterate on the feedback and modify our designs before building.

(But the rest of the company is also doing ad hoc product work, usually just making our best guess and building things. We often realize, right in the middle of development, that we missed something or that there's a deeper policy issue we need to grapple with.)

Any advice on how we can get customer feedback on concepts during the shaping cycle, before the betting phase? Would showing them fat marker sketches work?

We have two products: a B2C marketplace and a B2B SaaS. Maybe the designed mock-ups are better for the marketplace, while fat marker sketches would work for a tech-savvy SaaS customer.

Or would it be better to run these user interviews during the building cycle, where we'd mainly refine the design and UI but not wholesale change it?

Any advice would be appreciated on how to integrate real customers into this process!

After practicing Shape Up for a bit, my stance on usability testing and validation through prototype design has shifted. We don’t do customer interviews to validate mockups or even sketches. We also don’t vet feature ideas with customers, except maybe at a very abstract (verbal-only) level.

Instead, we just ask them where they're struggling at the moment, not just with our product but with their activities in general. We really focus on their current context and the problems they face now: the problems they're putting the most energy into fixing. That's what informs our shaping and betting the most.

The closest we get to talking with customers about supply-side solutions is when they already have a pre-conceived idea in their head, e.g. I'll ask, "So if you could get more information about the room you're in, when would you do it and why? How would you expect to get that info?" And they paint the picture for us. But we always try to validate just the problem, not the solution they propose.

So while we do continuous discovery and talk to customers frequently, we rarely talk to them about features once they’re shaped or in development. In fact, we don’t ask for user feedback about a feature until after it’s shipped and used in production.

I’ll also be the first to admit this approach isn’t for everyone, but it’s worked for us. All that energy we used to spend on validating design with prototypes is now directed elsewhere.

3 Likes

Very well said. There's a demand-side activity (JTBD) that involves discovering what your target customers struggle with, why they hire/fire certain products, where they're attempting to make progress in their lives, etc.

Regarding customers who already have a solution in mind: my preferred approach is to simply ask what they're doing today to solve that problem. In most cases they're not doing anything, so it's not really a struggle they're willing to solve; in the cases where they are, you've got a starting point in real customer activity to dig deeper into.

A more detailed version of my approach to feature requests is here.

2 Likes

Thank you for the very thoughtful reply. So if I understand you correctly, you moved the effort you were spending on design validation earlier in the product process, into problem exploration and definition.

My question is: why not do both? Why not continue deeply understanding pain points, as you do now, while also vetting your solutions before building?

The short answer is: if your team can afford to do both, and that testing will make your team more confident about shipping on time, it's worth a try. Maybe you'll uncover a process that I can learn from at some point in the future.

I think the best case for usability testing before a build is when your feature is so complex and new that it'd be incredibly expensive to build in code, so you have to design a facsimile to test. Ryan has shared a few stories about how he prototyped the design of hill charts in Basecamp and tested them with users (using Keynote, I think). In cases like that, upfront testing pre-build makes perfect sense.

The longer answer is that, in my experience, I don't know (to steal a phrase from my boss) if the usability-testing juice is always worth the squeeze.

Some context: Two or three years ago, I was firmly in the “validate before you build” camp. Usability testing is all about reducing risk and variability. So let’s spend lots of time building prototypes and making sure we’re vetting the design before a line of code is written.

It’s basically insurance. You buy usability testing to protect you from building the wrong thing. (In Agile projects that have no circuit breakers and endless sprints, this is insurance worth buying.)

In stark contrast, my interpretation of Shape Up is this: Why waste time and money buying any usability insurance? Isn’t it better to build something quickly and put it out there for customers to use?

After all, the most real, accurate usability testing you can perform is with real users, using real features, built with real code, in a real production environment.

Of course, the downside of building with zero usability testing pre-build is the risk of shipping the wrong thing (or, more commonly in my experience, the right thing designed wrong).

But the most clever, underrated aspect of Shape Up is that six-week circuit breaker. In the extreme worst-case scenario, you build something 100% wrong and you throw away six weeks of work. That's it. That's the maximum out-of-pocket loss.

And in practice, if you're using JTBD/demand-side customer research to understand the context driving your product feature, you won't ever be 100% wrong. And if you have good discipline when shaping and making bets, you reduce the risk of building the wrong thing a little more.

In addition, if your build team sequences their work properly, they'll put the riskiest stuff up front. And if that risk is the design, they can choose to build something rough yet real (in code), let a few actual customers play with it early in the build cycle, and do some early-stage usability testing and feature vetting.

In my opinion, that becomes a choice the build team should get to make, rather than being forced to UX-test every feature built in a cycle.
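
For what it's worth, the mechanics of that choice can be as lightweight as a feature flag. Here's a minimal TypeScript sketch of gating a rough-but-real build to a handful of early-access customers; all names here (the allowlist, the flag key, the render functions) are hypothetical and not tied to any particular flag library:

```typescript
// Sketch only: EARLY_ACCESS_ACCOUNTS, isFeatureEnabled, and the
// "new-room-info-panel" flag key are made up for illustration.

// A handful of real customer accounts invited to try the unfinished feature.
const EARLY_ACCESS_ACCOUNTS = new Set<string>(["acct_123", "acct_456"]);

interface User {
  accountId: string;
}

// Everyone outside the allowlist keeps the current production experience.
function isFeatureEnabled(feature: string, user: User): boolean {
  if (feature === "new-room-info-panel") {
    return EARLY_ACCESS_ACCOUNTS.has(user.accountId);
  }
  return false;
}

// At the call site, the rough build stays invisible to the wider user base.
function renderRoomView(user: User): string {
  return isFeatureEnabled("new-room-info-panel", user)
    ? "rough but real: new room info panel"
    : "stable production room view";
}
```

The point isn't the flag mechanism itself; it's that the build team, not a mandated process, decides when a feature is ready for real eyes.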

Anyway, all this is theoretical talk. Let me point to two real cases that have “shaped” my views on this.

First was a talk I attended in which a highly respected UX researcher walked through a case study: a year of UX research and usability testing. In-home contextual inquiries, tons of interviews, user diaries, prototypes and testing. They really pulled out all the stops to understand the problem space. A textbook example of UX research done right.

Then they built the thing and they were wrong. Not even a little wrong; way off wrong. As much as I admired the researcher's process, I recoiled at the outcome. Because at the end of the day, it took them more than a year to discover they'd built the wrong thing.

This is where I began questioning my “validate before you build” approach.

Second is my current project. Before I joined the team as head of product, the UX team did good pre-build research, tested prototypes and got feedback. Good signal, as we'd say internally.

Then, a funny thing happened. Some of those tested features validated quite well, but some of the features that people said they found useful during testing … those same users never actually used those features in production! Maybe our testing was incomplete or wrong, but you could easily argue that no amount of testing would have revealed that. Perhaps that's an insight that could only be discovered after we shipped.

So even when we thought we were doing usability testing “properly,” it didn’t guarantee certainty. What people say they’ll use isn’t always what they’ll actually use.

At the end of the day, it's all tradeoffs. For me, I'd rather invest more in the problem space and talk to customers, spend nearly zero during the build, and spend the rest of that usability-insurance money on measuring how folks actually use our product in production. But I'll be the first to admit that spending ratio will be different for every team.
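
As a concrete (and deliberately simplified) sketch of that last part, here's what the bare minimum of measuring actual usage can look like in TypeScript. The event shape and function names are hypothetical; in practice you'd lean on whatever analytics pipeline you already have:

```typescript
// Sketch only: the event shape and function names are hypothetical,
// not from a real analytics SDK.

interface UsageEvent {
  feature: string;
  accountId: string;
  occurredAt: Date;
}

const usageLog: UsageEvent[] = [];

// Call this from the feature's real entry points once it ships.
function trackFeatureUse(feature: string, accountId: string): void {
  usageLog.push({ feature, accountId, occurredAt: new Date() });
}

// Later: how many distinct accounts actually touched the feature?
function distinctUsers(feature: string): number {
  const accounts = usageLog
    .filter((e) => e.feature === feature)
    .map((e) => e.accountId);
  return new Set(accounts).size;
}

trackFeatureUse("new-room-info-panel", "acct_123");
console.log(distinctUsers("new-room-info-panel")); // 1
```

A query like distinctUsers is how you'd surface the gap I described above: features that tested well but never got used. Hope this helps.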

5 Likes

Agreed. I don't really think there's a way to objectively validate something with a customer that they can't actually have right now, because they have no skin in the game; it's all speculation at that point. To understand a customer's pain points you have to dig into what they do to make progress, not what they say they want. By default we're really bad at introspecting on our own past "whys" and at predicting our future actions, because we can only talk to our narrative selves. As far as our brains are concerned, the person who "does" (the experiencing self) and the person who "says" (the narrative self) are different people. So we dig into what people actually do to make progress, and the specific circumstances they do it in, to get as close "to the metal" as we can in discovering real demand.

1 Like

Nelson, thank you so much for your very detailed and super helpful reply! It's really interesting to see how your thinking on the subject has evolved.

I especially appreciate your main points: spend your limited usability "budget" on exploring the problem space, talking to customers, and measuring usage, but for super-risky builds, upfront usability testing can still be a fit.