User Testing Pitfalls w/ Redshift

Nearly everyone would agree that user testing is an essential part of the product development process. In practice, however, sometimes even the most well-intentioned product managers make basic mistakes when trying to conduct successful usability testing. In this post, David Westen and Diana Cheng from Redshift talk about common testing pitfalls and offer tips on how teams can avoid them.

1) Using a testing approach that’s unnecessarily heavy

David (Principal & Founder, Redshift): One problem I see a lot is people trying to gather too much data through user testing. They’re looking for a level of certainty from testing that can actually hinder instead of help the design process.

Diana (Director of User Research): I agree. As product designers we need to be comfortable working with imperfect information, especially early on. I’d argue no matter how much research you do, until you watch your product being used in the real world, it’s difficult to get to 100% confidence.

David: Yes. I’ve found that we often have to explain to our clients why we test with such a small number of users. They’ll ask, “How can you make any good decisions based on talking to five people?” What’s your answer to that?

Diana: Well, if you’re trying to test basic usability, it’s well established (Jakob Nielsen’s research, among others) that five people can reveal about 85% of your usability problems. It’s not going to cover everything, but you’ll find the most glaring issues you need to fix. Five users is also typically enough to start seeing patterns of behavior. After that, you hit diminishing returns.
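Diana’s “five users” figure comes from a simple discovery model popularized by Nielsen and Landauer: the proportion of usability problems found with n users is 1 − (1 − L)^n, where L is the chance that a single user exposes a given problem (roughly 0.31 in their original studies — the exact value varies by product and task, so treat this as a back-of-the-envelope sketch, not a guarantee):

```python
def proportion_found(n_users, problem_visibility=0.31):
    """Estimated share of usability problems uncovered by n_users,
    per the Nielsen-Landauer model: 1 - (1 - L)**n.
    problem_visibility (L) is the probability that one test user
    hits a given problem; ~0.31 is the commonly cited estimate."""
    return 1 - (1 - problem_visibility) ** n_users

# Diminishing returns become visible quickly:
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems found")
```

With L = 0.31, five users land at roughly 84–85%, and doubling the participant count from five to ten only buys about another 13 points — which is why multiple small rounds beat one big one.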

David: You get an incremental amount of additional confidence for a lot more work. What I always tell clients is that using such a small number of subjects lets you test earlier and more often with the same amount of resources. You get answers faster, and you can start making adjustments more quickly.

Diana: Another way to put it is, you should be aiming for multiple rounds of “just good enough” research that give you higher confidence each time.

2) Seeking validation for a path already decided

Diana: Something else I see: sometimes clients specifically ask us to look for quotes and data that help them make a certain case about their product. We all want our products to succeed, so I understand the motivation, but sometimes there’s a tendency to seek validation for a path that’s already been decided, which is the wrong way to use research.

David: That can happen in so many subtle ways. The way you ask your questions can so easily suggest a certain answer.

Diana: I cringe a little bit when I hear user testing questions like, “Would you say that this was confusing to use?” No! Don’t ask that! You’re leading the witness.

David: Or a question like, “Did you find this navigation frustrating?” “Would you be excited about a new service that let you etc. etc.?” The question is, how do we avoid leading the witness like this?

Diana: It’s not easy. Having a good plan or study protocol with unbiased questions helps, but one of our responsibilities as researchers is to recognize our own biases. You should almost be trying to prove the opposite of whatever your hypothesis is.

David: Yes! You need to aim to disprove your assumptions. The tendency is, you want your hypothesis to be validated, so there’s a desire to seek out answers and data that confirm it, when you really ought to be trying to poke holes in it.

3) Asking users for the answers

David: One thing I wonder about is, as organizations have embraced the idea of user-centric design, there’s sometimes a tendency to try to get users to hand you a design solution. I see users being asked questions like, “What kind of features would you like to see on this dashboard? What do you want to happen when you click this button?”

You’re talking about solutions with your users instead of the problem. And I think the risk of that is, you’re forfeiting the role of the designer.

Diana: It’s like the famous quote attributed to Henry Ford: “If I had asked people what they wanted, they would have said faster horses.” It doesn’t mean the user is wrong, but let’s remember the user is not the designer and may not always be able to articulate what they need. When a person says, “I wish this had x, y, or z features,” it’s our job as researchers to understand what problem the user is trying to solve by asking for those features.

David: The danger is when you take those suggestions very literally. And there’s pressure because we don’t want to be perceived as “ignoring our users.”

Diana: You should be doing research to figure out the problem you’re trying to solve. What are the key pain points? What’s not working?

David: I’ve also seen product managers try to delegate aesthetic choices by asking users, “Which one of these designs do you like better?” or “Do you like the red one or the blue one?”

But there’s a nuance here, isn’t there? I think if we see that a visual design is turning people off, then that’s clearly a problem.

Diana: If a user gives you design feedback that’s fine, but what’s important is how you interpret that input. It’s when you’re interpreting their “why” that you need to do the hard work and figure out, are they not liking the design because of a purely aesthetic concern, or are they not liking it because we’re making it harder for them to perform a specific task?

At the end of the day, research is necessary and great at identifying problems and opportunities, and guiding you in the right direction. But it’s not necessarily there to tell you the solution!


Key Takeaways

  1. Aim for multiple rounds of “just good enough” testing
  2. Recognize your own bias and try to prove the opposite of your hypothesis
  3. Use testing to focus on uncovering usability problems, not necessarily design solutions

About the Authors

David Westen, Principal & Founder of Redshift Digital, is an expert in internet technology, user experience, and digital business strategy.

In 1999 David co-founded Internet Learning Corporation (ILC), a VC-backed startup focusing on creative, customized e-learning solutions, which was acquired in 2002 by A.S.K. Learning. During his seven years as CTO of A.S.K., David led the development of e-learning initiatives for clients including HP, Cisco Systems, Veritas, EMC, Sony, Commonwealth Bank and PricewaterhouseCoopers.

David is a graduate of Stanford University.

Diana Cheng is the Director of User Research at Redshift.

She has a BA from UC Berkeley in Political Science and Art. She received her Masters from the IIT Institute of Design in Chicago, which teaches ethnography and other social science methods as part of the human-centered design process. Prior to Redshift, she held design, strategy and innovation roles for such companies as Google, Panasonic, and Jump Associates. She is a regular contributor to the Design Thinking program at the Rochester Institute of Technology.

When not channeling the voice of the user, Diana can often be found eating ice cream and sipping matcha.

Products That Count is one of the largest communities of product managers, leaders and entrepreneurs in the world. It provides insider access to founders and C-level execs, such as the Netflix VP of Product, the legendary author of Crossing the Chasm, the Trulia founder, and the Lyft CMO, via speaker series, podcasts, and invite-only executive retreats. Partners include WeChat, Yelp, LeanPlum, Pragmatic Marketing, and StartupDigest. Its venture arm, Mighty Capital, invests in companies building products that count once they have demonstrated product/market fit.