How to spot and fix poor survey questions

Surveys: the holy grail of understanding your customers at scale. But are they? It depends on how you ask your questions.

Braňo Šandala
UX Collective


Photo by Thierry Fillieul from Pexels

A survey is a popular technique for collecting quantitative data. It’s also one of the most difficult techniques to master: the challenge is to design survey questions so that you collect relevant data.

We all hope to ask questions that get actionable answers to build our decisions upon. When you design a survey, you strive to form questions that all respondents understand in the same way, without bias. Otherwise, you’re asking for trouble: you’ll get insufficient or erroneous answers. Even worse, you may not even know that your collected data are spoiled.

I’d like to show you how to identify these troublemakers and fix them. Let me walk you through frequent question errors and how to turn them into inquiries that bring you relevant answers. You’ll be able to review survey questions (e.g. the ones that others suggest you should ask) and evaluate whether they’re worth asking.

Double-barreled (or compound) questions

What do you get when you ask two questions at once and provide only one set of answers? A mess in your resulting data, and a double-barreled question. Here’s an example:

Have you used our website or documentation?
— Yes
— No

If respondents answer “Yes”, did they use your website or documentation? Wouldn’t you like to know? As a rule of thumb, watch for “and/or” to spot compound questions and split them into separate questions to obtain more relevant results.

By the way, the newly split question (Have you used our website?) is still murky. Would you be interested in the answers of users who used your website only once, three years ago? What’s missing here is the aspect of time. Based on your goals, you could limit your question to the desired time frame (Did you use our website in the last week?), or go further and ask about the frequency of use (How many times did you use our website in the last week?).

Hypothetical questions

We humans are terrible at predicting the future. Unless your respondents own a crystal ball, it’s a waste of time to ask them what they would do under some imaginary circumstances. Here’s a quintessential hypothetical question:

What would you do if one of your family members became permanently disabled?
— I would try to take care of them myself or with the help of family members
— I would use at-home nursing service
— I would rely on the help of institutions that provide ongoing care
— Other: _____________
— I don’t know
— I don’t wish to answer

If you have never taken care of a disabled person, you can’t possibly imagine what you would do once it happened. Even if you had such experience, it’s still difficult to envision how you would handle another situation, since there is a vast range of disabilities.

Sadly, there is no straightforward remedy for hypothetical questions; no one can foresee their future actions. Fortunately, people are far more likely to be able to tell you about their past, and that’s how you can work around a hypothetical question.

In this case, you can narrow your respondent pool to those who have permanently disabled loved ones. Ask them how they resolved the situation in the past. Slice the answers according to the types of disabilities (or another relevant factor). Then you’ll be able to hypothesize about future behavior based on the range of past behaviors. With this approach, you’ll have past data you can rely on. Now you’re the one holding the crystal ball.
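To make the slicing step concrete, here’s a minimal Python sketch (pandas), assuming you collected one past-behavior answer per respondent; the column and category names are hypothetical.

import pandas as pd

# Hypothetical answers from respondents who already care for a
# permanently disabled family member.
answers = pd.DataFrame({
    "disability_type": ["mobility", "mobility", "cognitive",
                        "cognitive", "sensory", "mobility"],
    "care_solution": ["family", "at-home nursing", "institution",
                      "family", "family", "at-home nursing"],
})

# Share of each care solution within each disability type: the past
# distribution you hypothesize future behavior from.
shares = (answers.groupby("disability_type")["care_solution"]
                 .value_counts(normalize=True))
print(shares)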

Bonus: 9 times out of 10 (as in the example above), you’ll deal with a hypothetical question about how respondents would behave. Sometimes you may tackle a hypothetical question about what respondents believe. To fix questions about beliefs, I’d recommend reading the “professional footballer” example at Q-set.

Leading questions

To answer honestly or to answer favorably? This is a leading question:

We feel we have improved in many key areas lately. How would you rate your experience with our customer support?
— 1–5 stars

We all desire to be praised for our work. But when you seek to find out how your customer support performs, you want honest answers (unless you need to report oddly positive ratings to your boss). Putting toxic company culture aside, the example above will lead your respondents to answer more favorably. The problem is the praising intro. What if the respondents haven’t noticed the improvements you are suggesting? They may feel ashamed of it, so they choose the path of conformity instead: they’ll give a more positive answer to stick with what the question suggests.

To remedy these questions, seek neutral wording. Remove the opinions or phrases that steer respondents off the path to an unbiased answer.

Loaded questions

Some questions come loaded with assumptions: the assumption that respondents possess the context, knowledge, expertise, or experience to answer the question accurately. Armed with what you know, how would you answer the following question?

How would you characterize team structure at your company?
— Flat
— Hierarchy
— Holacracy
— Squads
— Other: _____________

Loaded questions are rather difficult to reveal, since you have to turn your empathy up to eleven. Let’s say, as a researcher, you study how organizations are structured. You have reviewed tons of material and you deeply understand all the organizational terms and jargon. As a respondent, you’re a developer in a team of five. You know your team leader reports directly to the CPO. You’re focused on outcomes, and you don’t care what the method your team works by is called.

If you had learned about your respondents’ abilities beforehand, you wouldn’t have asked the question above. And yet surveys are packed with loaded questions. Why is that? We project our experience onto others. It is comfortable to think that my world is the same as your world, and it is difficult to step out of one’s own shadow. You have two options to get rid of loaded questions: either learn to be aware of your ego overshadowing the survey, or ask someone else (ideally a couple of pilot respondents) to review the questions.

Questions with inadequate response options

In surveys, we strive to ask closed questions to speed up the analysis of quantitative data. Besides asking the right question, it’s equally important to provide a relevant set of complete and exclusive answers. The example below fails to do so:

What is the average age of your employees?
— 19–25
— 25–35
— 35–45

If you’re 25, which option would you select? And what if you’re 46? Double-check your response options by trying to answer the question yourself. Test for boundary and extreme values, and adjust the answers so they are complete and exclusive (e.g. 19–25, 26–35, 36–45, 46+). Once you’re happy with the result, you can give this question further thought. Is it even answerable? Let’s find out in the next section.
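Before moving on, here’s a minimal Python sketch of that boundary test, assuming buckets that mirror the corrected example; the helper function and the probe range are my own choices.

# Each bucket is (low, high); None marks an open upper bound ("46+").
buckets = [(19, 25), (26, 35), (36, 45), (46, None)]

def check_buckets(buckets, probe=range(16, 100)):
    """Report values that fall into no bucket (gap) or several (overlap)."""
    issues = []
    for value in probe:
        hits = [b for b in buckets
                if value >= b[0] and (b[1] is None or value <= b[1])]
        if len(hits) != 1:
            issues.append((value, "gap" if not hits else "overlap"))
    return issues

# Flags 16-18 as gaps: if an average employee age under 19 is plausible
# for your audience, the option set still isn't complete.
print(check_buckets(buckets))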

Questions that expect a non-trivial calculation

In the question above, were you trying to calculate the average age of your colleagues? How would you solve the following math problem:

What percentage of roles at your company are junior roles?

First, count all your employees. Then decide what a junior role means at your company. Count all junior employees, divide that by the total number of employees, and multiply by 100. Done.

When you are considering a question that requires a calculation, ensure you know the answers to these questions first:

  • How likely is it that respondents will have all the data available when filling out the survey?
  • How likely is it that all respondents understand the meaning of a junior role in the same way?
  • How likely is it that respondents are able to calculate the result?
  • How likely is it that respondents are willing to calculate the result?

Unless you’re certain of positive answers, you can count on only one thing: the original question will be answered with a wild guess, if at all. You’ll end up with random data, and you may not even know it.

Questions that require calculation carry a lot of uncertainty. If you cannot avoid them, lower the risk so you receive more precise answers. Here’s what you can do: don’t bother respondents with the calculation, and ask them for both numbers instead (again, only if you’re certain they know them). Ask for the total number of employees. Then ask for the number of employees who have been with the company for less than one year, if that’s the attribute that best correlates with your description of a junior role. This way, you give all respondents a description they can interpret in a uniform manner. Finally, calculate the percentage yourself from the data provided.
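As an illustration of that last step, here’s a minimal Python sketch (pandas), assuming the survey exported the two raw numbers per respondent; the column names are hypothetical.

import pandas as pd

# Two raw numbers per respondent, exactly as asked in the survey.
responses = pd.DataFrame({
    "total_employees": [40, 120, 8],
    "employees_under_1_year": [6, 30, 1],
})

# Do the division yourself instead of asking respondents to do it.
responses["junior_share_pct"] = (
    responses["employees_under_1_year"] / responses["total_employees"] * 100
).round(1)

print(responses)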

Questions about a combined experience

You’ve learned from the previous example that letting respondents compute the answers may not be the brightest idea. Even when you knew they only needed to divide two numbers, you couldn’t rely on that. What’s coming next is even worse. The following question requires respondents to compute much more:

What are your expectations when trying a new tool?
— Better Team–Client communication
— Work efficiency improvements
— Better ROI
— Time savings
— Ease of use for developer
— Ease of use for the client
— Flexibility and scalability — I can use only what I need

Let’s have a look into a respondent’s mind to find out what’s computing there: “Ok, well, it’s an interesting question. I guess I have evaluated five tools this year. I’m not sure what we’re trying to achieve with the evaluation of the last one. I was involved in it only to find out how developers may like it. But before that, oh boy, that one was a huge replacement for an HR management system. We were hoping to find a new tool that will save us at least 30 % of monthly costs. Hmm. What was the tool we evaluated before that? Oh yeah, it was a new communication platform for a team. My god, I’m bored. I guess that should be enough for the answer. I’ll go with number two, three and five. Next question, here we go!”

Wait, what just happened? The respondent combined experiences by cherry-picking what they could recall. In a survey with hundreds of participants, you can only imagine how others would approach this question. They may average out the result. They may pick the most pleasant experience. Or the recent one. Or whatever. You’ve just ordered a gumbo of answers that you cannot compare or analyze.

Depending on what you wish to learn, there are a couple of ways to improve this question. In all cases, you need to limit it so the respondent can answer in the context of a single experience. If you want to know about the motivation to change an HR system, limit the question to the last experience with an HR system evaluation. If you want to explore how people evaluate different tools, ask about the motivation of the last evaluation, plus ask about the tool that was evaluated.

Summary

If you’re a few days from launching your survey and there is no turning back, use this list of problematic questions to review yours. Take these hints as the minimum you can do, so the whole effort of getting to know your customers doesn’t go to waste.

💡 Have you found the article insightful? Buy me a coffee to fuel more posts on the topic. Thanks!

--

As a freelance product designer, I help startups and software companies turn bold product ideas into thriving businesses.