When not to use surveys as a primary research tool

Are your surveys giving you false data?

Stephanie Orkuma
UX Collective

--

Overhead shot of a chat and office items
Photo by RODNAE Productions via pexels.com

You’ve just launched a new product with an amazing interface design and a research-driven experience. You’re elated. It is your first rodeo, and you are excited about all the wonderful feedback you’re going to get. A few months in, however, you discover that you barely have any downloads on the app store, and those who did download it barely used the key features you introduced.

You sit down with your team and wonder why this is happening. You carried out research, and every decision was backed by data. The people said they loved the idea and wanted a product like this. So, what is the problem?

Well, I’ll tell you for free: your design was not driven by data; it was driven by false data. And therein lie all your product woes. The research that served as the backbone, the foundation, of the entire product…was built on data that is untrue.

What you did was design a Google Form asking people, “What do you think of a product that connects people struggling with an addiction to a treatment centre that is right for them?” The majority of respondents said they loved the idea, so you went ahead and designed the solution. That wasn’t real data you gathered. That was false and misleading data.

(By the way, this actually happened, and Jaime Levy talks about it in her book, UX Strategy: How to Devise Innovative Digital Products That People Want. When Jaime conducted interviews, she found out that people did not actually want the product…at least not under the existing business model. Sadly, this was after the founder had conducted online surveys, launched, and gotten zero customers in 18 months.)

Here’s how you get false data

The reason for conducting user research is to understand the needs and goals of your target audience in order to inform the decisions you make for your product. To achieve this, you must select the method with the best potential to produce accurate data. Most times, surveys just aren’t the right route to take.

I will give you instances where surveys just will not cut it. But first, let’s talk about ways that a survey can return false data to you.

1. Asking users what they like

People say and believe they like things that they do not actually like. That sounds untrue, but it isn’t. “Often, asking a question directly is the worst way to get a true and useful answer to that question. Because humans.” (from On Surveys by Erika Hall)

As in the example above, respondents may easily fall into acquiescence bias when asked whether they like the idea you are building. They realize this is your idea, and because they want to please you, they say it is a nice idea and maybe even claim it is something they’ve always wanted. But how can you verify this?

You can’t. Not with a survey.

2. Poor sampling

Your target users are young people living in your country, and since most of your followers are young people like you, you design a survey and post it on Twitter, where anyone at all can fill it out. That is a risk.

You say you can curtail this risk by asking for their age range in the form, and that’s nice, because then you see that 30% of the respondents don’t fit your archetype and you note that down. But do you also remove this 30% of responses from the overall data, or do you just overlook them? And if you decide to remove them, how do you know how many of them marked yes and how many marked no to question 7?
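One way out of that trap, if you do run the survey, is to keep the screener answer attached to every respondent’s row, so off-target respondents can be dropped wholesale instead of being guessed at from aggregates. A minimal sketch in Python with made-up data (the field names and target ranges are my own, for illustration):

```python
# Hypothetical raw responses: one record per respondent, with the
# screener field ("age_range") stored alongside every answer.
responses = [
    {"age_range": "18-24", "q7": "yes"},
    {"age_range": "45-54", "q7": "yes"},  # outside the target archetype
    {"age_range": "25-34", "q7": "no"},
    {"age_range": "18-24", "q7": "yes"},
]

TARGET_RANGES = {"18-24", "25-34"}

# Drop whole respondents, not aggregate counts.
clean = [r for r in responses if r["age_range"] in TARGET_RANGES]

yes_count = sum(1 for r in clean if r["q7"] == "yes")
print(len(clean), yes_count)  # 3 respondents remain; 2 of them marked yes
```

If each answer only survives as a tally, there is no way to reconstruct which yeses came from the 30% you wanted to exclude; per-respondent rows keep the data recoverable.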

Do you see where I am going with this? Once your survey has been “contaminated” it is difficult to clean it up. Also, when working with quantifiable data, you want to aim for a good enough representation of your users in numbers. What this means is “if you have 10,000 users, you need to complete your survey by at least 1,937 of them to represent the entire user base.” (from Surveys are the McDonald’s of UX Methods by The Fountain Institute).

You can work with the 10–50 responses you got, but you have to remember to treat such data as early signals, not proof of users’ needs, preferences, etc.
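The 1,937 figure quoted from The Fountain Institute matches what Cochran’s sample-size formula gives at 95% confidence and a ±2% margin of error, once you apply the finite population correction for a base of 10,000 users. You can verify it in a few lines (the function name and defaults are mine, for illustration):

```python
import math

def sample_size(population, z=1.96, margin=0.02, p=0.5):
    """Cochran's formula with a finite population correction.

    z=1.96 -> 95% confidence; margin=0.02 -> +/-2% error;
    p=0.5 is the most conservative assumption about response spread.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)       # finite population correction
    return math.ceil(n)

print(sample_size(10_000))  # -> 1937
```

For very large user bases the correction barely matters (the number approaches 2,401); for small ones it is what keeps the required sample from exceeding the population itself.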

3. Asking users to predict the future

“From 1–5, how likely are you to use this product?” How would the user truly know? They can say 1 or 2, and you go back struggling to see where the problem is when, in truth, their behavioural patterns show they are very likely to use your product. But you wouldn’t know that, because your research tool cannot accurately surface that information.

4. Absence of context

A lot of juicy details hide behind follow-up questions, and if you don’t ask them, you’ll miss out on all that useful information. You may lack the context for most of the answers given, and depending on how the questions were worded, you may end up more confused than you were at the start of the research.

5. Using surveys wrongly

And by this, I mean choosing surveys to gather data that is best gathered through other research methods. I’ll talk about this in the next section below.

The way I see it, surveys just sit around thinking of all the ways they can return false data to you. Here’s how Erika Hall puts it: “It’s much much harder to write a good survey than to conduct good qualitative user research. Given a decently representative research participant, you could sit down, shut up, turn on the recorder, and get good data just by letting them talk. (The screening process that gets you that participant is a topic for another day.) But if you write bad survey questions, you get bad data at scale with no chance of recovery.”

Jeff Humble, writing for The Fountain Institute, described them as “the fastest way to get terrible ideas for designing products.” Damn, that’s harsh, but you cannot deny the truth in those words.

When Not to Use Surveys

It is worth mentioning that when I speak of surveys, I do not necessarily mean screener surveys; I mean surveys used as a primary research method on a project. In fact, Frank Spillers says of surveys: “oh yes, surveys are fine IF, and only if, you are using them in conjunction with qualitative user research. They provide a nice quantitative contrast to contextual field data that is usually motivated by user emotions.” While I do not agree that surveys should only be used when mixed with qualitative research, I can say that this may be a good way of using surveys if you need both quantitative and qualitative data.

What I think people tend to miss is that surveys are not substitutes. You do not decide to use surveys just because you are low on time, money, or qualitative research knowledge. According to Erika Hall, they are not a fallback for when you can’t do the right type of research. Carefully determine what research method fits your goals and pursue that instead.

Here are examples of times you should not use a survey:

1. To get feedback on a product, feature, or process

Acquiescence bias readily jumps into play here. And when it doesn’t, the respondent may unknowingly give erroneous data.

For example, you want to know if users are able to accomplish their tasks on sites that are heavy on interaction design and aesthetics. You want to know if they can navigate them easily and find items in good time. If you design a survey asking them this, people who love interactions and motion design will say yes, it is easy to accomplish tasks. Those who find excessive interactions irritating will say they have a lot of difficulty. Either group may be lying, but you’d innocently sit and analyze your data of lies. Don’t do this.

Conduct a usability test instead. Watch these people interact with the product, then measure how long it took them to accomplish a task and whether they succeeded at all.
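Those two measures, task success and time on task, are straightforward to compute once you have observed sessions. A minimal sketch over invented session data:

```python
from statistics import median

# Hypothetical usability-test sessions: completion time in seconds,
# and whether the participant actually accomplished the task.
sessions = [
    {"participant": "P1", "time_s": 48, "success": True},
    {"participant": "P2", "time_s": 95, "success": True},
    {"participant": "P3", "time_s": None, "success": False},  # gave up
    {"participant": "P4", "time_s": 62, "success": True},
    {"participant": "P5", "time_s": 130, "success": False},   # found wrong item
]

success_rate = sum(s["success"] for s in sessions) / len(sessions)
times = [s["time_s"] for s in sessions if s["success"]]

print(f"task success rate: {success_rate:.0%}")  # 60%
print(f"median time on task: {median(times)}s")  # 62s
```

Unlike a survey answer, these numbers come from observed behaviour, so there is nothing for acquiescence bias to distort.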

2. To find out the utility of a product or feature

As established earlier, asking people direct questions does not yield good data. Instead, seek to learn about their behaviours, attitudes, experiences, etc., and see how these affect the product you are building. Look at how they currently solve the problem you want to solve, and draw insights from that to influence your design.

These details are difficult to share in a survey. Even if you manage to ask all the right questions, it will be tedious for the respondent to do all that typing, so they’d either give really short (unhelpful) answers or simply abandon the survey. You do not want that. Conduct interviews instead.

3. To uncover user preference

You are torn between blue buttons and white buttons, so you design a “quick survey” to identify user preferences. Why?

Conduct an A/B test and see which performs better. Do not rely on what the users say (alone).
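If you want more than eyeballed click counts from that A/B test, a standard two-proportion z-test tells you whether the gap between the variants is larger than chance. A minimal sketch with invented numbers; |z| above roughly 1.96 means the difference is significant at the 95% level:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: blue button clicked 120/2000 times, white button 158/2000.
z = two_proportion_z(120, 2000, 158, 2000)
print(f"z = {z:.2f}")  # z ≈ 2.36, so the white button's lead is significant
```

The point stands either way: let observed clicks, not stated preferences, settle the question.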

It is easy to get false data from surveys, so use them sparingly. Ensure that quantitative data is what your research goals actually call for before settling for a survey. And even then, be wary of the questions you ask. If you want to learn more about choosing between quantitative and qualitative research methods, this article by the Nielsen Norman Group may help.

--

At the intersection of UX and Business Strategy…UX Strategy. That’s where you’d find me. And also more easily at thedadadesigner@gmail.com