Short answer: if your team can afford to do both, and testing will make your team more confident about shipping on time, it's worth a try. Maybe you'll uncover a process that I can learn from at some point in the future.
I think the best case for usability testing before a build is when your feature is so complex and new that it'll be incredibly expensive to build in code, so you have to design a facsimile to test. Ryan has shared a few stories about how he prototyped the design of hill charts in Basecamp and tested them with users (using Keynote, I think). In cases like that, upfront testing pre-build makes perfect sense.
Longer answer is that, from my experience, I don't know (to steal a phrase from my boss) if the usability testing juice is always worth the squeeze.
Some context: two or three years ago, I was firmly in the "validate before you build" camp. Usability testing is all about reducing risk and variability, so let's spend lots of time building prototypes and vetting the design before a line of code is written.
It's basically insurance. You buy usability testing to protect yourself from building the wrong thing. (In Agile projects that have no circuit breakers and endless sprints, this is insurance worth buying.)
In stark contrast, my interpretation of Shape Up is this: why waste time and money buying any usability insurance? Isn't it better to build something quickly and put it out there for customers to use?
After all, the most real, accurate usability testing you can perform is with real users, using real features, built with real code, in a real production environment.
Of course, the downside of building with zero pre-build usability testing is the risk of shipping the wrong thing (or, more commonly in my experience, the right thing designed wrong).
But the most clever, underrated aspect of Shape Up is that six-week circuit breaker. In the extreme worst-case scenario, you build something 100% wrong and you throw away six weeks of work. That's it. That's the maximum out-of-pocket loss.
And in practice, if you're using JTBD/demand-side customer research to understand the context driving your product feature, you won't ever be 100% wrong. And if you have good discipline in shaping and making bets, you reduce your risk of building the wrong thing a little more.
In addition, if your build team sequences its work properly, the riskiest stuff goes up front. And if that risk is the design, the team can choose to build something rough yet real (in code), let a few actual customers play with it early in the build cycle, and do some early-stage usability testing and feature vetting.
That becomes a choice the build team should get to make, in my opinion, rather than being forced to UX-test every feature built in a cycle.
Anyway, all this is theoretical talk. Let me point to two real cases that have "shaped" my views on this.
First was a talk I attended where a highly respected UX researcher went through a case study where they did a year of UX research and usability testing. In-home contextual inquiries, tons of interviews, user diaries, prototypes and testing. Really pulled out all the stops to truly understand the problem space. A textbook example of UX research done right.
Then they built the thing, and they were wrong. Not even a little wrong; way off wrong. As much as I admired the researcher's process, I recoiled at the outcome, because at the end of the day it took them more than a year to discover they had built the wrong thing.
This is where I began questioning my "validate before you build" approach.
Second is my current project. Before I joined the team as head of product, the UX team did good pre-build research, tested prototypes, and got feedback. Good signal, as we would say internally.
Then a funny thing happened. Some of those tested features validated quite well, but some of the features that people said they found useful during testing … those same users never actually used those features in production! Maybe our testing was incomplete or wrong, but you could easily argue that no amount of testing would've revealed that. Perhaps that's an insight that could only be discovered after we shipped.
So even when we thought we were doing usability testing "properly," it didn't guarantee certainty. What people say they'll use isn't always what they'll actually use.
At the end of the day, it's all tradeoffs. For me, I'd rather invest more in the problem space and talking to customers, spend nearly zero during the build, and spend the rest of that usability-insurance money on measuring how folks are actually using our product in production. But I'll be the first to admit that spending ratio will be different for every team. Hope this helps.