The Secret to Successfully Recruiting Quality Test Participants

How to get the right users into your product testing sessions

Fiona Foster · Published in UX Collective · Aug 9, 2016

Start with a clear research goal

First things first: know your research goal before you begin. It doesn’t matter what kind of test you’re running. Without a clear idea of the information you’re seeking, you will not be able to measure success, and your results will be more difficult to analyze. Identify what you want to learn from the test upfront, and it will ease the rest of the testing process.

How many users do I need?

By now, you’ve probably heard that you can identify about 80% of usability issues with just 5 users. This well-cited statistic comes from the Nielsen Norman Group, and is a great starting goal for most user testing. However, this does not work for every type of testing. Below are guidelines for how many users you need for viable data with other types of research:

  • Card sorting: 15+ users
  • First-click testing: 30+ users
  • Usability testing: 5 users
  • Eye tracking: 40+ users
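
For context, the 5-user figure falls out of Nielsen's problem-discovery model, in which the share of issues found after n users is 1 − (1 − L)^n, where L is the chance that a single user uncovers any given issue (Nielsen's cross-study average is L = 0.31, not a universal constant). A quick sketch of the curve:

```python
# Nielsen's problem-discovery model: found(n) = 1 - (1 - L)**n.
# L is the probability one user uncovers a given usability issue;
# 0.31 is Nielsen's measured average across studies, not a constant.

def share_of_problems_found(n_users: int, l: float = 0.31) -> float:
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10):
    print(f"{n:>2} users -> {share_of_problems_found(n):.0%}")
```

With L = 0.31, five users surface roughly 84% of issues, and each additional user returns less and less, which is why small iterative rounds beat one big usability test.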

Recruit from your actual user base

You are not your user. Your friends are not your user. Your coworkers are not your user. For valid research, you need to recruit users who mirror the people who actually use your product. Ideally, you’ll have personas to work from, which will largely define your user base.

You might ask: “Well, why can’t I just have my boyfriend/friend/boss/coworker/etc. participate in the research?” Even if this person isn’t actively involved with your project, they may be just familiar enough with the problem you’re trying to solve that their answers will skew your results.

Here’s an example of how things can go wrong: On a recent project, I had to fill in a few executive-level employee personas for a card sort, and recruited users from within my agency. Even though I’d selected people with limited knowledge of the project, the results immediately showed an obvious skew. After just two colleagues completed the test, I saw that they had created categories that reflected the IA strategy our team had discussed internally.

Some data is better than none. But unbiased data is always your best choice.

Kick-start your user recruitment

Whether or not you work with an outside recruitment group, it’s important to have the following information ready beforehand:

  • The number of users you want to test
  • Age range and gender
  • Education level
  • Country of residence
  • Participant compensation (gift card, dollar amount, etc)

Depending on your user base, you may also want to screen participants by job title, industry, company size, and so on.

The goal is to know your recruitment parameters before you even get started, because having them ready speeds up getting a quote and kicking off recruitment.

Wait, so… how do I reach my users?

So you’ve identified your users, and this is where it gets tricky: Where do you find these people? Are you going to have to sell a kidney to afford it? Well, no. There are many ways to get test participants, and the best option will vary depending on how specific your recruitment parameters are.

I could write an entire post just about user recruitment options, but I’m just going to mention my current favorite, Optimal Workshop. I use them for card sorting, tree testing, first click testing, and analyzing user interviews. On average, our tests have run about $10 per user. If you get in touch with their customer support team (who are super helpful), they can screen participants to meet more specific parameters, and put together a custom quote for you.

How to tell if you’ve recruited a bot

Hopefully, this is a non-issue for most UX practitioners, but I’ve run into it a few times, specifically with card sorting. Bots, whether automated scripts or humans clicking through at random, can drastically skew your test results. How do you know if your tester is a bot? Some are more obvious than others, but there are a few tell-tale signs:

  • The user created categories like “egajsdfk”
  • The user completed the sort in significantly less time than all the other participants
  • The category names are in another language… and the English translation? Either part of the test’s instructions, or one of the card names
  • Categories like “group 1,” “group 2,” and “group 3”

I pulled the examples above from a recent card sorting project. Our team was testing a very specific user base with a highly technical list of cards. We deemed any user who completed the test in under 2 minutes a bot; our cards were far too technical to be sorted that quickly with any thought or care.
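
The checks above are mechanical enough to automate as a first pass. Here is a minimal sketch of those heuristics; the record shape, field names, and thresholds are hypothetical, so adapt them to whatever your card-sorting tool exports:

```python
# A rough bot-screening pass over card-sort results. The participant
# dict shape, GENERIC_NAMES, and thresholds are illustrative only.

GENERIC_NAMES = {"group 1", "group 2", "group 3"}
MIN_SECONDS = 120  # our "under 2 minutes" cutoff; tune per study


def longest_consonant_run(name: str) -> int:
    """Length of the longest unbroken consonant run (a crude gibberish signal)."""
    run = best = 0
    for c in name.lower():
        if c.isalpha() and c not in "aeiouy":
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best


def suspicious(participant: dict) -> list[str]:
    """Return the reasons (if any) a participant's result looks bot-like."""
    reasons = []
    if participant["duration_seconds"] < MIN_SECONDS:
        reasons.append("finished implausibly fast")
    for name in participant["categories"]:
        if longest_consonant_run(name) >= 5:  # e.g. "egajsdfk"
            reasons.append(f"gibberish category: {name!r}")
        if name.strip().lower() in GENERIC_NAMES:
            reasons.append(f"generic category: {name!r}")
    return reasons
```

A record like `{"duration_seconds": 45, "categories": ["egajsdfk", "group 1"]}` trips all three signals. Treat the output as a prompt for manual review, not an automatic rejection.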

I also like to include a brief questionnaire at the end of the test to get the user to type some actual words. If the participant can’t form a coherent sentence, that’s another clue that you might have found a bot (or that your participant was really rushed… or maybe just lazy). Though these answers can definitely still be faked, it gives you a little more insight if you have a questionable result.

Refresh your users for iterative testing

If you plan to run multiple tests, it’s important that you get a fresh batch of users every time. This is key to avoid biased results — if someone has already tested your product once, then they have enough familiarity with it to influence their answers the second time around. It’s always best to get a fresh perspective.

Further reading and resources

Like what you read? The links below helped inform this article, and are great resources for further learning.

Also, I’d love to hear about any recruitment tools that you’ve used, whether you loved it or hated it. Please comment below!

Why 5 is the Magic Number for UX Usability Testing

How Many Test Users in a Usability Study?

Recruiting Usability Test Participants

The Cost of User Testing a Website

User Testing with the Right Audience
