UX & Product Designer

Thoughts

I write about UX design, programming, and journalism—sometimes all at once.

What I Learned from 30 Usability Tests in 30 Minutes

Usability testing is a powerful research method that assesses people's actual behavior, not just their self-reported actions. Well-designed usability tests can help researchers understand the effectiveness of a product, but poorly designed tests can yield unhelpful or even misguided insights.

Before putting any sort of test in front of users, it is important to know what you as a researcher want to know about your product. That way you can design a test that will provide the type of insights that you need.

Last week one of my colleagues challenged our UX team at Viget to put ourselves in our users’ shoes by taking a usability test. I went ahead and took 30 tests, all on the rapid-fire testing site UsabilityHub.com. My takeaways from that 30-minute session are as follows:


DON’T ask users about their preferences.

Usability tests are most effective in identifying problems with a design by checking how easily and efficiently users can complete tasks. Tests that ask users about their personal preferences, on the other hand, can be unpredictable.

For example, the test below showed me two versions of a landing page for a wedding website and asked which design I preferred. The versions were so similar that I could not tell the difference between the two until I did a double take. I then realized that the versions differed solely in the hero image.

Two versions of a landing page for a wedding website.

I chose the second version, mainly because I like flowers and the second version has more flowers. All this test tells the researcher is that I like the second version’s photo better. But does it really help the researcher to know that? Wouldn't it be more helpful to know which photo makes the service seem more trustworthy? Or more personal? The test also doesn’t take into account any usability issues. Even though I said I preferred one design, my answer says nothing about the site's flow, navigation, or layout. The researcher may interpret these results as support for a design that could have real usability problems.

If you must use preference tests, provide enough context and specificity for users to give you meaningful insights. Preference tests should focus on whether the colors, design, and tone support the message and brand the designer is trying to convey. The versions need to be similar enough to seem related but different enough to warrant a choice. For the example above, the researcher could have shown a screenshot of just the top half of the page and asked: "Wedding Wire wants to appeal to small businesses and promote a trendy, modern image. Which hero image best fits the message Wedding Wire wants to convey?"

Preference tests should focus on a design's fit with a company's message and brand.

The test above falls just short of a good question. The researcher starts off well by explaining the company’s desired brand and feel, but rather than asking users about fit, the researcher asks which name they prefer. That question led me to choose “Snack Shack” because I like the way it sounds, even though I actually believe “McCormick’s Beach House” better evokes an upscale, New England-y, slightly retro lobster joint.


DO ask open-ended usability questions.

One of the biggest drawbacks of online usability tests is that you are unable to probe your participants for more information. A few times during my 30-minute session, I was asked via open-ended questions what I would improve in terms of design and usability. I appreciated having the opportunity to voice issues I saw in the site, which hopefully provided the researcher with some helpful feedback.

Ask your users open-ended usability questions.

DO focus on tasks.

The test below is an example of a usability test done well. It asked me where on the site I would click to remove or edit the yellow dress. The test gave me a clear mandate, and my answer shows the researcher where I expect that functionality to be.

An example of an effective task-based test.

DON’T provide users with too much guidance. 

Sometimes, I was asked to navigate to a specific part of a site or app. For example, a test might show a navigation bar and ask: “Find the section about living with osteoporosis."

I would then find the section “Living with Osteoporosis” word-for-word in the navigation bar and click it. A better way to structure this test would be to provide users with a situation and desired outcome, and ask them to find the right information. For instance, the prior question could read: “You have osteoporosis. You want to research ways to make your daily life easier. Where would you click on this site for more information?” 


DO provide users with an escape route.

Sometimes a test honestly had me stumped. One test showed me the landing page of a parenting website and asked where I would click to find a new reading recommendation. I could not locate a single spot on the site where I thought I would find that information. Still, I was forced to pick something just to get through the test. Usability tests should provide an escape route for users—whether it’s an “I can’t find it” catch-all option or an invitation to provide their own answer.

I still don't know where I'd logically find a reading recommendation.

DO recruit the right users.

The question in this test seems pretty straightforward—do you prefer your passwords masked or un-masked? 

Do you prefer your passwords masked or un-masked? 

But my answer is not so simple. I have a different preference depending on the context. Am I entering the password on my phone (unmasked) or laptop (masked)? Is this for my bank’s website (masked) or Facebook (unmasked)? 

If at all possible, recruit test participants who match the target audience for your product. If you can't, give users the right context by setting up hypothetical situations.

UsabilityHub requires no information upfront from participants. While this may boost participation rates, it also means that researchers have no real information about the people taking their tests. Researchers need to be aware that their results may come from people who are not in their target audience.


DO understand the pros of online testing.

Online testing can be a valuable means of gaining insights. First, researchers have shown that people act differently (usually more effectively or productively) when they know they are being observed. Online tests are anonymous—no one is watching your behavior directly, which can mitigate this effect. Second, users are distanced from you as a researcher. In person, they may soften their answers in an effort to please you. Online, they can be brutally honest. Finally, online tests may feel more natural because participants can work on their own devices in their own environments.


Hopefully these tips will help you become an even more informed researcher. Go forth, and keep testing!


More Resources

To learn how to set up and conduct a usability test, check out the resources below.

Rocket Surgery Made Easy by Steve Krug

"Practical Advice for Testing Content on Websites" by Hoa Loranger

Seven Common Usability Testing Mistakes by Jared Spool

9 Biases In Usability Testing by Jeff Sauro

Note: I am not affiliated with UsabilityHub.