UX Research: 5 things that could go wrong during a pilot

Michael A. Morgan · Published in UX Collective · 7 min read · Jan 8, 2019

Doing UX research takes work. A lot of it. The hurry-up-and-wait rhythm of the job makes it feel like more preparation than perspiration. No matter how many times I have conducted interviews in my career, I still get butterflies in my stomach and occasional jitters before going into the main event. To mitigate the stress, it always pays to be prepared.

One particular part of the preparation involves taking your discussion guide for a test drive — or what industry professionals and academics call a ‘pilot.’ According to Jonathan Lazar, Professor of Computer and Information Sciences at Towson University and co-author of Research Methods in Human-Computer Interaction, pilot studies are an effective way to identify potential biases in your study and should be conducted in the same manner as the actual sessions. Think of the pilot as a time to identify all of the things that could (and will) go wrong in your study; it’s a way to purge the unnecessary and irrelevant elements that could taint your data collection. It’s also an opportunity to fix what’s broken before the big day.

So, what are some of the things that could go wrong during your pilot? And how can they be addressed before it’s too late?

Here are 5 big areas where things can fly off the rails, along with tips on how to address each of them:

  • Your discussion guide lasts much longer than the allotted study time
  • The study setup doesn’t work as planned
  • The discussion guide question sections don’t flow correctly
  • The language and terminology used in the discussion guide might be misunderstood by participants
  • Finding pilot participants is extremely difficult (maybe even impossible)

Your discussion guide lasts much longer than the allotted study time
Sometimes, a particular topic might take a bit longer to go through with a participant. For example, a question may ask participants to describe the challenges they face with a particular product. Depending on the product’s history with customers, this question might lead to a richly detailed (but lengthy) discussion, especially if there are many challenges, or a few major ones that require anecdotal evidence from the participant. The pilot is the perfect opportunity to get a sense of where the questions that demand more in-depth responses will appear.

Pay careful attention to the duration of each section during the pilot session(s). Ideally, the participant in your pilot will have enough knowledge of the topic to make the conversation as realistic as possible, so you can test out how much time the discussion requires. If you find that you’re going over your allotted session time as a result of asking too many questions or spending too much time on a specific item, then ask yourself how important that particular question is to the research goals. If it’s not important, consider removing it from the study altogether. If it’s less important than other questions, consider reshuffling it into another part of your script, perhaps towards the end of the section. That way, you can still pick up the question if there is time to get to it.

Before going into any pilot session, I estimate how long each section of my discussion guide should take. This is especially important during exploratory studies that are primarily one-on-one interviews involving mostly conversation. These estimates attempt to capture how long each conversation is going to last; the times should sum to the total duration of the session, including intros, moderator protocols and wrap-up items. They are ballpark figures you can compare against the pilot sessions. If the duration of a section during the pilot is not in line with its estimate, update the estimate accordingly. Analyze any section of questions that runs significantly longer than estimated and consider removing or modifying it to fit the allotted session time.
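To make this concrete, here is a hypothetical time budget for a 60-minute exploratory interview (the sections and durations are illustrative, not a template):

  • Intro, consent and recording setup: 5 minutes
  • Warm-up and participant background: 10 minutes
  • Topic 1, current workflow: 15 minutes
  • Topic 2, challenges with the product: 15 minutes
  • Topic 3, wish list (time permitting): 10 minutes
  • Wrap-up and thank-you: 5 minutes

If the pilot shows Topic 2 routinely stretching to 25 minutes, the budget no longer sums to 60, and something has to be trimmed, reshuffled or removed.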

It takes a certain amount of discipline and editorial acumen to be rigorous about what to leave in and what to take out (especially when your product stakeholder says “Everything’s equally important”). If a question leads you down a rabbit hole of discussion points tangential to the topic, there is a good chance you can park those points and save them for another study.

The study setup doesn’t work as planned
Like a movie director, you had the study setup all planned out in your head: a mobile app study with a camera attached to a helmet hanging over the head of the participant while they swiped away at your newfangled app. The camera would pick up their interactions while they thought out loud and managed to balance the duct-taped camera on their helmeted head.

Joking aside, testing complicated, unconventional setups before your real sessions is absolutely critical to the project’s success. The pilot is the time to try out these sometimes crazy setups and make sure they work as intended. Discovering that a setup doesn’t work with the first real participant would be an incredibly costly mistake for a project team.

The pilot gives the team time to come up with a better approach. Let’s say the pilot reveals that the camera on the participant’s head is not an effective means of recording interactions; perhaps it wobbles too much. As a result, you decide to find a better location for the camera: a ceiling-mounted camera captures the interactions without any wobble. With additional time to see what does and doesn’t work, better solutions can be found.

The discussion guide question sections don’t flow correctly
It will be immediately clear during the pilot session if a particular segue from the topic of apples to one of, let’s say, oranges is not working. Yes, they are both fruits. One might be tempted to say after the apples discussion, “Speaking of fruits, let’s talk about oranges.” This might work, but it might also throw off the flow of the conversation; a moderator suddenly switching from one topic to another can come across as disingenuous (or just plain confusing!) to a participant. They might think, “Why is this person jumping around so much? Why shift the discussion from apples to oranges?” Awkwardness ensues.

Learning in a pilot session where a script’s flow can be improved helps you smooth out the final discussion guide, ensuring as few bumps as possible. If both topics are needed to answer key research questions, find an alternate route for inserting oranges into the conversation; there might be better opportunities elsewhere in the guide to discuss them. If the awkwardness remains, then either scope the topic out of the session if it’s less important, or reframe the initial protocol that introduces the topic of apples so that it includes oranges as well.

Reframing improves the fluidity of the discussion guide because both apples and oranges are established as topics of discussion from the start.

The language and terminology used in the discussion guide might be misunderstood by participants
Has this ever happened to you? You come across a fairly technical term that you put into your discussion guide. Your product stakeholders have confirmed this is the language used by their customers, which gives you the assurance that it will be understood by study participants. However, engineering and sales say otherwise. Who is right? The answer: it doesn’t matter.

What matters is that the term gets piloted and tested with customers. During the pilot session, ask: “When I say the term [term under debate], what does that mean to you?” See if it makes sense to them or if it’s too technical. If there’s an issue with the word, ask the pilot participants if there might be a better term to use.

Use the insights gained from these initial sessions as a topic for discussion with stakeholders before beginning the real sessions. There might be an opportunity to test which terms make the most sense to customers rather than assuming all customers understand one specific way of describing something. (We all know what happens when we assume.)

Finding pilot participants is extremely difficult (maybe even impossible)
Depending on the audience your product is tailored to, it might be very difficult to find participants for your study, let alone your pilot sessions. Ideally, pilot participants have some level of familiarity with the product’s domain, so the pilot session will be as close as possible to the real thing. Even if they’re not users of your particular product, perhaps they use a competing one? If so, it would be beneficial to have them participate, as they could reasonably be considered prospects for your company’s product.

If all else fails and you are unable to recruit real or close-to-real users of the product, recruit people whose job it is to know the product. These people act as proxies for real users; people in sales and service roles can serve as great proxies for the product’s user base. While not ideal, it is the next best thing, because these individuals should have relatively deep knowledge of the product and can talk about how customers use it. This information helps you prepare for what to expect to hear from real customers, but it should not appear in your report as evidence of what customers say and do with your product.

Conclusion
Don’t go into your first day of sessions without feeling confident that you’ve eliminated the unnecessary and confusing questions in your discussion guide. Piloting the guide with real users (or good proxies) will show you how seamlessly (or not) it flows. It’s also an opportunity to catch any language snafus that might force participants to think twice about what you’re asking them. The pilot is the time to get these things wrong, so that your study gets off on the right foot the very first time you sit down to do research with real users.

References
Lazar, J., Feng, J. H., & Hochheiser, H. (2017). Research methods in human-computer interaction. Amsterdam: Morgan Kaufmann.
