Short answer: if your team can afford to do both, and testing will make your team more confident about shipping on time, it’s worth a try. Maybe you’ll uncover a process that I can learn from at some point in the future.
I think the best case for usability testing before a build is when a feature is so complex and new that it’d be incredibly expensive to build in code, so you have to design a facsimile to test instead. Ryan has shared a few stories about how he prototyped the design of hill charts in Basecamp and tested them with users (using Keynote, I think). In cases like that, upfront testing pre-build makes perfect sense.
Longer answer: in my experience, I don’t know if the usability testing juice is always worth the squeeze (to steal a phrase from my boss).
Some context: Two or three years ago, I was firmly in the “validate before you build” camp. Usability testing is all about reducing risk and variability. So let’s spend lots of time building prototypes and making sure we’re vetting the design before a line of code is written.
It’s basically insurance. You buy usability testing to protect yourself from building the wrong thing. (In Agile projects with endless sprints and no circuit breakers, this is insurance worth buying.)
In stark contrast, my interpretation of Shape Up is this: Why waste time and money buying any usability insurance? Isn’t it better to build something quickly and put it out there for customers to use?
After all, the most real, accurate usability testing you can perform is with real users, using real features, built with real code, in a real production environment.
Of course, the downside of building with zero pre-build usability testing is the risk of shipping the wrong thing (or, more commonly in my experience, the right thing designed wrong).
But the most clever, underrated aspect of Shape Up is that six-week circuit breaker. In the extreme worst-case scenario, where you build something 100% wrong, you throw away six weeks of work. That’s it. That’s the maximum out-of-pocket loss.
And in practice, if you’re using JTBD (Jobs to be Done) demand-side customer research to understand the context driving your product feature, you won’t ever be 100% wrong. And if you have the discipline to shape work and make bets well, you reduce your risk of building the wrong thing a little more.
In addition, a build team that sequences its work properly puts the riskiest stuff up front. And if that risk is the design, they should build something rough yet real (in code), let a few actual customers play with it early in the build cycle, and do some early-stage usability testing and feature vetting.
In my opinion, that’s a choice the build team should get to make, rather than being forced to UX-test every feature built in a cycle.
Anyway, all this is theoretical talk. Let me point to two real cases that have “shaped” my views on this.
First was a talk I attended where a highly respected UX researcher went through a case study where they did a year of UX research and usability testing. In-home contextual inquiries, tons of interviews, user diaries, prototypes and testing. Really pulled out all the stops to truly understand the problem space. A textbook example of UX research done right.
Then they built the thing, and they were wrong. Not even a little wrong; way off wrong. As much as I admired the researcher’s process, I recoiled at the outcome. Because at the end of the day, it took them more than a year to discover they’d built the wrong thing.
This is where I began questioning my “validate before you build” approach.
Second is my current project. Before I joined the team as head of product, the UX team did good pre-build research, tested prototypes, and got feedback. Good signal, as we’d say internally.
Then a funny thing happened. Some of those tested features validated quite well. But for some of the features people said they found useful during testing … those same users never actually used them in production! Maybe our testing was incomplete or wrong, but you could easily argue that no amount of testing would’ve revealed that. Perhaps it’s an insight that could only be discovered after we shipped.
So even when we thought we were doing usability testing “properly,” it didn’t guarantee certainty. What people say they’ll use isn’t always what they’ll actually use.
At the end of the day, it’s all tradeoffs. For me, I’d rather invest more in the problem space and in talking to customers, spend nearly zero during the build, and spend the rest of that usability insurance money on measuring how folks actually use our product in production. But I’ll be the first to admit that spending ratio will be different for every team. Hope this helps.