Test desirability before usability


If we build it, they will come, right? Or at least that’s the fairy tale Hollywood would have us believe. But up the road in Silicon Valley, they know a different, cold hard reality: 90% of startups fail and 70% of new products fail. We often build digital products and test them for usability, only for users to never show up, or to show up and fail to convert. So what is going wrong?


In essence, the desirability for those products just isn’t there. We are more often wrong than right about what we think users will want. We are busy building product ideas based on assumptions, and design briefs are often full of them. So before we get anywhere near building a Minimum Viable Product (MVP), what can we do instead?


Test Assumptions, Not Ideas

Testing our assumptions about what users find desirable, through a series of product experiments, allows us to gather the evidence we need to proceed with, or halt, building those ideas. Whether our ideas are based on opinions, egos, hunches, best practice, user research or pure instinct, experiments allow us to ‘call our shot’ and find out whether we are likely to be right or spectacularly wrong. Exercises such as assumption mapping help us identify our riskiest assumptions, i.e. the ones we have the least evidence for and that are most likely to cause the product to fail if we’re wrong. We then test those assumptions first, with the aim of de-risking the design brief by building up a strong evidence base for the product.
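
To make that concrete, here is a minimal sketch of assumption mapping in TypeScript. The assumption statements, scoring scales and scores are all illustrative assumptions of mine, not taken from any real backlog; the point is simply that ‘riskiest’ means high importance combined with low evidence.

```typescript
// A sketch of assumption mapping: score each assumption on importance
// (how badly the product fails if it's wrong) and evidence (how much we
// already know), then test the riskiest first. All scores are illustrative.

interface Assumption {
  statement: string;
  importance: number; // 1-5: impact on the product if we're wrong
  evidence: number;   // 1-5: strength of existing evidence
}

const assumptions: Assumption[] = [
  { statement: "Users will pay to add a pitch video", importance: 5, evidence: 1 },
  { statement: "Users find profiles via search", importance: 3, evidence: 4 },
  { statement: "Users browse profiles on mobile", importance: 4, evidence: 3 },
];

// Riskiest = most important with the least evidence
const byRisk = [...assumptions].sort(
  (a, b) => (b.importance - b.evidence) - (a.importance - a.evidence)
);

console.log(byRisk[0].statement); // → "Users will pay to add a pitch video"
```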


So how do we conduct an experiment?

Take on the role of a scientist. Having chosen a desirability assumption to test, start by defining a hypothesis with a clear cause and effect. Then decide on the test method you will use to simulate that cause, define the metric that will prove the hypothesis right or wrong, and detail the actions you will take if you are proved right, proved wrong, or the result is inconclusive. For example (a code sketch of this structure follows the list):


  • Assumption - Users will be willing to upgrade to add a pitch video to their profile

  • Hypothesis - If we add ‘pitch video’ to our business-level profile features (cause), then more standard-level users will upgrade to business level (effect)

  • Test - We will use a landing-page fake door test to explain the benefits of this feature, with an upgrade CTA. We will link to this via an email sent to standard-level users

  • Metric - X out of X standard-level users will click the upgrade CTA within X days

  • Actions - If we are right, we will follow this up with a feature stub test on the profile page. If we are wrong, we will test another feature instead. If it’s inconclusive, we will run the test again but link to it via an ad
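
To show how little machinery this needs, here is the same experiment written as a minimal TypeScript sketch: a record plus a decision rule. Every name, threshold, sample size and message below is an illustrative assumption, not a recommendation.

```typescript
// A sketch of the experiment above as data plus a decision rule.
// All thresholds, sample sizes and strings are illustrative assumptions.

type Outcome = "right" | "wrong" | "inconclusive";

interface Experiment {
  assumption: string;
  hypothesis: { cause: string; effect: string };
  testMethod: string;
  metric: { usersEmailed: number; clicksNeeded: number; windowDays: number };
  nextSteps: Record<Outcome, string>;
}

const pitchVideoUpgrade: Experiment = {
  assumption: "Standard-level users will upgrade to add a pitch video",
  hypothesis: {
    cause: "Add 'pitch video' to the business-level feature list",
    effect: "More standard-level users upgrade to business level",
  },
  testMethod: "Landing-page fake door test with an upgrade CTA, linked via email",
  metric: { usersEmailed: 200, clicksNeeded: 20, windowDays: 14 }, // illustrative numbers
  nextSteps: {
    right: "Run a feature stub test on the profile page",
    wrong: "Test another feature instead",
    inconclusive: "Re-run the test, linking via an ad",
  },
};

// Call the shot once the window closes: a plain threshold check, with a
// minimum-reach guard so a failed email send doesn't masquerade as 'wrong'.
function evaluate(exp: Experiment, ctaClicks: number, usersReached: number): Outcome {
  if (usersReached < exp.metric.usersEmailed) return "inconclusive";
  return ctaClicks >= exp.metric.clicksNeeded ? "right" : "wrong";
}

console.log(evaluate(pitchVideoUpgrade, 27, 200)); // → "right": proceed to the stub test
```

Writing the experiment down as data like this forces the team to commit to the metric and the follow-up actions before the results arrive, which is what makes it a genuine ‘call your shot’ rather than a post-hoc rationalisation.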


Lo-fi → Hi-fi

Test methods start off lo-fi, such as a quick one-question survey asking people about their recent behaviour, and become more hi-fi over time as you build up a clearer picture of desirability and of what your product is likely to become. Knowing where to gather your data is crucial, so flesh out your personas to include exactly where your user group hangs out online, ensuring you’re posting that survey or fake door test link where it will have maximum effect.
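
When you do post that fake door link, instrument the CTA so the click metric can actually be measured. Here is a minimal sketch, assuming a hypothetical analytics endpoint, element id and event name; a real setup would more likely lean on an off-the-shelf analytics tool.

```typescript
// A sketch of instrumenting the fake door CTA. The endpoint URL, element id
// and event name are hypothetical placeholders.

const ANALYTICS_ENDPOINT = "https://example.com/api/events"; // hypothetical

function trackCtaClick(source: "email" | "ad"): void {
  // sendBeacon queues the request so it survives the page unloading,
  // which an ordinary fetch fired on click might not
  navigator.sendBeacon(
    ANALYTICS_ENDPOINT,
    JSON.stringify({ event: "upgrade_cta_click", source, ts: Date.now() })
  );
}

document.querySelector("#upgrade-cta")?.addEventListener("click", () => {
  trackCtaClick("email");
  // It's a fake door: no upgrade exists yet, so be honest with the user
  alert("Thanks for your interest! This feature is coming soon.");
});
```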


Fake it till you make it

So before your team gets anywhere near wireframing, styling, building and usability testing that dream MVP, make sure you conduct a series of Minimum Viable Experiments to thoroughly de-risk the brief and give your startup or product the very best chance of success. If you build it (based on evidence), they will more likely come.

🤞