UX RESEARCH

Successful navigation of the UX research landscape

Defining and choosing user research methodologies, conducting interviews, and presenting findings

Nima Torabi

--

User experience is all about the design of a product or service that fits the needs of its target users. UX research is all about:

  • Thoroughly understanding the target audience, their needs, goals, and the context in which they’ll interact with the product or service
  • How well the product or service will serve the target users’ needs, otherwise known as finding and improving product-market fit
  • Continuously uncovering opportunities to create new features or update current ones so they better fit users’ needs. In other words, UX research helps inform design decisions and ensures that a business is meeting and exceeding the expectations of its users

There are four major ways that UX research helps a business:

  • Saving costs by building the right thing — uncovering the true needs of users means that product teams better understand which solutions will work, so they don’t waste development time on features that won’t be valuable to users, and they account for requirements from the beginning. Additionally, having a clear goal from the start of a project allows decision-making to happen faster and avoids rework. Reports indicate that the cost to fix an error found after product release is four to five times as much as one uncovered during design, and an error that surfaces in the maintenance phase can cost up to 100 times more.
  • Further saving costs by building it right — product teams perform research throughout development to ensure that they’re implementing the solution in a way that is easy to use. Iterative development methodologies help UX teams uncover the changing needs of customers, so they can pivot if needed, reducing the possibility of wasted development time or rework.
  • Increasing customer happiness and loyalty — customers and users have ever-growing expectations for positive experiences with digital products and services. Providing a baseline good experience is no longer a differentiator, but rather the expected minimum. Ensuring ease of use means it’s less likely that customers will go to a competitor for the same service. For example, Apple’s customers line up to purchase the latest gadgets at full cost, because they constantly have a great experience using Apple’s products and services. Happy and loyal customers are more likely to repurchase and encourage friends or family to purchase as well.
  • Uncovering opportunities for improvements or new features — UX research allows product teams to see both what is resonating well and what could be better with the product or service, in the context in which it gets used. For example, teams may notice users relying on workarounds for difficult tasks, or see that they use other tools to fill a need that current products or services do not serve. Such information can help teams plan future fixes, features, and solutions that they had not previously considered.

Various types of UX research methodology

i — Usability testing

Usability testing is one of the most frequently used approaches in user experience research. Some of its features include:

  • It can be moderated or unmoderated: in a moderated environment, a moderator asks participants to perform tasks, observes where they run into trouble or have questions, and asks follow-up questions to understand their thought process
  • It can be conducted in person or remotely, for example via online video-conferencing tools
  • It can be run on any live site or piece of software, including competitors’, or on a prototype of any fidelity
  • It can be used to help choose between design alternatives and is particularly effective for discovering issues that impede the experience

Some of the most common usability testing tools include UserTesting, Usabilla, UserZoom, Loop11, and UsabilityHub. The industry reference for usability testing is Steve Krug’s book, Rocket Surgery Made Easy.
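A useful rule of thumb for sizing usability tests comes from Jakob Nielsen and Tom Landauer’s widely cited model (not from this article): if a typical participant has probability p of hitting a given problem, with roughly 0.31 as their reported average, then n participants surface about 1 - (1 - p)^n of the problems. A minimal sketch:

```python
# Expected fraction of usability problems surfaced after n test sessions,
# per the Nielsen/Landauer model: found(n) = 1 - (1 - p)^n.
# p = 0.31 is the commonly cited average; treat it as an assumption,
# not a universal constant.

def problems_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

With the default p, five participants surface roughly 85% of problems, which is why small rounds of testing repeated across iterations tend to beat one large round.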

ii — Interviewing

With an interview, you sit down with a participant and ask them open-ended questions about their needs, goals, and motivations to gather qualitative information. When possible, you perform the interview in the place where the users would usually be interacting with whatever you plan to build and observe their natural behavior. This is referred to as an ethnographic interview or contextual inquiry.

Interviews can be conducted live or remotely and are mainly used to:

  • Learn about different types of users and differences in their behaviors
  • Gauge users’ outlook, attitudes, and impressions of specific items
  • Create user personas

The industry reference for interviewing users is Steve Portigal’s book, Interviewing Users: How to Uncover Compelling Insights.

iii — Card sorting

Card sorting is a quantitative method used to help determine categorization and hierarchy when defining information architecture. There are two variations of card sorting:

  • Open — where you ask participants to organize the elements that need categorizing into whatever groupings they think make sense, and then label those groupings
  • Closed — where you already have a set navigation structure or hierarchy and ask participants to place elements within those buckets

There are several great digital card-sorting tools out there such as Optimal Workshop, but you can also use sticky notes and whiteboards. The industry reference for card sorting research is Donna Spencer’s book, Card Sorting.
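Card-sort results are typically analyzed as a similarity matrix: for each pair of cards, the share of participants who grouped them together. A minimal sketch of that computation, with made-up card names and groupings:

```python
# Pairwise similarity from open card-sort results: the fraction of
# participants who placed each pair of cards in the same group.
# All card names and groupings below are illustrative.
from collections import defaultdict
from itertools import combinations

def similarity(sorts):
    """sorts: one entry per participant, each a list of card groups (sets)."""
    together = defaultdict(int)
    for sort in sorts:
        for group in sort:
            for a, b in combinations(sorted(group), 2):
                together[frozenset((a, b))] += 1
    return {pair: count / len(sorts) for pair, count in together.items()}

example = [
    [{"Shipping", "Returns"}, {"Shirts", "Shoes"}],   # participant 1
    [{"Shipping", "Returns", "Shoes"}, {"Shirts"}],   # participant 2
]
sim = similarity(example)
print(sim[frozenset(("Shipping", "Returns"))])  # grouped by both -> 1.0
```

Pairs with high scores are candidates to live under the same navigation heading; digital tools such as Optimal Workshop produce the same kind of matrix at scale.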

iv — Eye tracking and click testing

Eye tracking is a method that utilizes equipment to capture and analyze where a person is looking. There is also technology that can capture and analyze the clicking and scrolling behaviors of users, usually referred to as click tracking or scroll tracking. These technologies are mainly used on live websites, software, or applications, aiming to provide a true understanding of what actions users take without having to rely on their memory or ability to self-report. However, they cannot provide context as to why users behave in these ways.

The most prevalent tools for eye tracking and click testing include Tobii, Crazy Egg, Clicktale, UserZoom, and Chalkmark.

v — Multivariate testing and A/B testing

Multivariate testing is a method where design teams create several versions of a product and compare which one does the best job at hitting goals, for example, changing a button to three different colors and seeing which gets the most signups on a page. A special case of multivariate testing is A/B testing, where we only compare two items rather than many.

Multivariate tests are always conducted on live products to optimize performance, whether that means generating the most clicks, conversions, signups, etc.

Some online tools that can help with multivariate testing include Optimizely, Visual Website Optimizer, and Google Website Optimizer.
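Under the hood, tools like these decide a winner with a statistical comparison of conversion rates. A minimal sketch of one common approach, a two-proportion z-test, with made-up counts:

```python
# Two-proportion z-test: is variant B's conversion rate significantly
# different from variant A's? All counts here are illustrative.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

Here |z| > 1.96 clears the conventional 95% threshold; production A/B tools layer on corrections for peeking and for comparing many variants at once.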

vi — Desirability studies

Desirability studies allow product teams to ensure that their visuals match their brand goals and evoke the desired emotional response in users. There are several variations, but the most common is to show participants variations of visual designs and ask them to select which words best describe each. The list of words given is based on the words that best describe the brand goals, plus their opposites. Teams can then analyze which of the designs evokes the most positive associations. In this variation, teams can run sessions in person and ask qualitative follow-up questions.

Conducting desirability studies is quite simple and straightforward. You will need printouts of the visual designs, a list of descriptions, and a way to take notes. To reach a broader audience set, teams could automate the collection of responses using any remote survey tool that allows both screenshots and multiple-answer question types.
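Analyzing the responses is essentially a word count per design. A minimal sketch, where the designs, descriptor words, and brand-goal word list are all illustrative:

```python
# Tally which descriptors participants chose for each design variant and
# compute the share of brand-positive words. All data is illustrative.
from collections import Counter

responses = {
    "Design A": ["modern", "trustworthy", "modern", "busy"],
    "Design B": ["dated", "busy", "cluttered", "trustworthy"],
}
positive = {"modern", "trustworthy"}  # words matching the brand goals

scores = {}
for design, words in responses.items():
    tally = Counter(words)
    scores[design] = sum(tally[w] for w in positive) / len(words)
    print(design, dict(tally), f"positive share: {scores[design]:.0%}")
```

The design with the highest positive share is the one whose visuals best evoke the intended brand associations.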

vii — Expert or heuristic reviews

Expert reviews are detailed assessments of an interface, service, or product conducted by someone trained in UX best practices. The reviewer compares the service or interface against those best practices and makes recommendations for improvement based on those criteria. Expert reviews are a fast way to ensure that what you’re building generally follows users’ expectations and industry best practices.

Ideally, several UX professionals perform independent expert reviews and compare notes; in practice, there is usually only time for one person to perform such a detailed assessment.

viii — Surveys

Surveys are a list of questions designed to gather facts or opinions from a targeted list of users. Surveys can integrate several types of questions, such as text questions about demographics, first-click questions, or desirability questions, in either quantitative or qualitative form. Numerous digital survey tools with varying complexity and features are out there, from the free Google Forms to SurveyMonkey, QuestionPro, or Zoomerang. The industry reference for surveys is Caroline Jarrett’s book, Surveys That Work.


ix — Diary studies

Diary studies involve asking participants to record their behaviors or thoughts on a given topic at specific points over time, such as asking people to record the time every time they use a specific app. In a structured diary study, teams can provide the same set of tasks, questions, or guidelines for participants to answer at regular times.

Diary studies can be used for anything from understanding the context of how something is being used in real life, to observing user behavior change over time after using your product or service. Dscout is a mobile tool that allows users to provide data throughout the day.

x — Personas

Personas are used to help describe the different types of users that a company serves. Product teams perform a variety of research tactics to understand their key user bases and the main differences in their behaviors, goals, and usage. All personas are users of the product, but they have very different contexts, usage patterns, and goals.

To create the personas, product teams need to pull data from various research sources into a unified storyline covering the different user groups’ skills, goals, environments, key behaviors, and the context the product or service occupies in their lives. Teams then refer to the personas as they move on to make design decisions, typically creating a document that summarizes each persona’s key attributes.

xi — Co-designing methods

Co-designing methods or participatory design workshops are collaboration sessions between users, designers, developers, and other potential stakeholders where the whole team will focus on creating solutions for predefined problems with immediate identification of user needs and issues, business considerations, and technical limitations.

Choosing Research Methodology — by type

Once product teams have decided that UX research is needed, they need to decide on the proper approach. There are several types of research methods that UX professionals call on, depending on the type of question they’re trying to answer. There’s no right or wrong approach, but to select the most appropriate method, it can be helpful to understand the major categorizations of research.

Choosing research methodology — a landscape of user research methods © Christian P. Rohrer 2014

Qualitative vs. quantitative research

Quantitative and qualitative research serve different purposes, but they are often used in combination to uncover hidden trends and their drivers.

Quantitative research produces data that represents numeric information, such as the number of clicks on a certain area, or the percentage of site visitors that fill out a form. Quantitative data serves to produce objective outputs and is not based on subjective opinions. Quantitative research is best at capturing the trends of what is happening and may even yield statistically significant data. Examples of quantitative research include complaint rates, A/B tests, card sorts, surveys, click tests, and eye-tracking studies. Note that quantitative data can be misinterpreted when working with a very small sample set.

Qualitative research produces information that can’t be expressed by numbers, such as emotional responses or first impressions. Qualitative research is often used to help uncover why certain trends are happening, and that is why it is usually conducted after quantitative research. Qualitative research is normally done on a much smaller scale because it needs direct feedback from people. Examples of qualitative research are usability tests, focus groups, interviews, diary studies, and participatory design workshops.

Behavioral vs. attitudinal research

In behavioral research, teams observe the actions that a person takes; in attitudinal research, teams ask people about their opinions.

In the area of UX research, teams tend to rely more on behavioral research because, more often than not, what people report in attitudinal research does not match what they end up doing in practice. This does not mean that attitudinal research isn’t helpful: companies need to know when users’ expectations do not match their behavior, how users perceive different brands, how they expect something to work, and their outlook on potential features. Oftentimes, teams need to conduct behavioral and attitudinal research in combination to get a holistic understanding of their customers.

  • Behavioral research examples — ethnographic studies, usability studies, A/B tests, and eye-tracking studies
  • Attitudinal research examples — surveys, focus groups, and preference tests

Moderated vs. unmoderated research

Moderated research means that teams interact directly with participants. This is ideal because teams can ask unscripted questions and dig deeper into interesting threads of conversation, but it can be very time-consuming. Care also needs to be taken to conduct the discussions in an unbiased manner, so that participants are not led to answer in a particular way. The most common moderated methods are usability tests and interviews.

Unmoderated research is completed by a participant with no researcher present, such as filling out a survey or trying out a piece of software with predetermined questions. With this type of research teams still have to be careful about crafting the questions so that they don’t create bias.

While moderated and in-person research is recommended when possible as teams can read participants’ body language and find opportunities to dig more into follow-up questions, if the budget only allows remote unmoderated research, then it is certainly preferred over skipping research altogether.

Choosing research methodology — by environment

Choosing the proper research methodology is often dependent on a variety of internal and external environmental factors such as:

  • Organizational structure and development culture
  • In-house vs. consulted work
  • Stage of the product development lifecycle

Organizational structure and product development culture

In traditional companies that use waterfall development mindsets, rigorous research is conducted upfront in the requirements gathering and design phase — also called customer discovery — and then again at the very end of development.

In agile methodologies, rather than conducting rigorous research only at the beginning and end of the development cycle, teams test consistently but on much shorter timelines, with development generally scoped into smaller increments.

In a waterfall process, research teams have the time to conduct exploratory research and fully understand users and their needs, so it is better to use in-depth methods, such as extensive in-person interviews focused on understanding the goals of the particular brand, or ethnographic observation of, say, shopping experiences.

In an agile environment, the whole team starts designing and building right away, so research teams don’t have the luxury of deeply exploring users’ needs and goals before work begins, and research generally cannot be as thorough. The benefit of the agile methodology, however, is that teams can keep doing small chunks of research. For instance, they might do five interviews in the first sprint, start a diary study in the next sprint, and then test an early prototype of the solution in the sprint after that. This iterative approach means that teams get consistent feedback to inform decisions as questions arise and can constantly validate that they are heading down the right path.

In-house vs. consulted work

UX research can be quite different depending on whether it is done internally or performed by a consultancy firm.

When working internally:

  • Teams can develop deep relationships with users and track their actions and research responses over time. It is therefore better to choose longer-term methodologies, such as diary studies or longitudinal surveys, where the team asks the same questions or examines the same behaviors over an extended period
  • There is high potential for bias and missed opportunities: as teams work with the same interfaces regularly, they come to know them so well that they may inadvertently lead participants in certain directions
  • Potential political limits — as teams grow intimate with the politics of a project and learn what would make the project team and leaders happiest, they may craft test plans that favor biased solutions

To combat biases, teams can employ external researchers who are completely unfamiliar with the project to review plans and perform a pilot test. It can also be beneficial to employ a variety of research methods so that teams do not see the same results over and over again.

As UX consultants, however, teams and individuals are almost always going to be less familiar with the background of the project, which means they won’t be as susceptible to bias as internal teams. However, they are also likely to be required to work under stricter defined timelines and not have access to an existing customer base, so there can be some logistical challenges when finding and scheduling the right kind of participants.

Stage of the product development lifecycle

There is no one formula for selecting the best method at any given time but understanding the stage of product development and considering the goal of each phase can help UX and product development teams narrow down research goals and time constraints, helping them guide their decisions.

At a high level, there will be three distinct stages of product development with different research needs:

Three distinct stages of product development with different research needs
  • Strategizing something brand new — whether it’s a completely new service or a new feature of legacy software, teams will need to focus research on 1) uncovering users’ needs and goals — utilize qualitative attitudinal methods, such as interviews, 2) finding room for improvement over users’ current set of solutions — utilize more behavioral methods, like a moderated usability test, and 3) validating that the idea serves users in some unique and meaningful way — use a mix of qualitative and quantitative methods, such as surveys, to get a sense of both scale and additional context about users’ needs.
  • Actively designing or building — in this stage, teams will be actively trying to answer whether they are building something right or not. They will need to collect and use information that informs design and development decisions to optimize performance and set development priorities. For this stage, UX research needs to focus on mostly behavioral research such as card sorts, task-based usability tests, and A/B tests to inform decisions along with some attitudinal research such as desirability studies.
  • Evaluating the performance of a live product or service — here, teams will want to focus assessments on summarizing trends and uncovering opportunities and therefore will need to utilize quantitative behavioral research techniques, such as A/B testing or data analytics to understand trends, and qualitative methods, such as usability testing of competitors to uncover opportunities in the business space.

Executing the research effectively

The target participants

Fruitful UX research requires the participants to be a representative sample of the real or target users.

If the project is at the beginning stages of defining its target user and doesn’t have validated personas, then research teams can create proto-personas. Proto-personas are hypothesis-based descriptions of the assumed target users, and research will focus on validating those assumptions and uncovering additional insights about potential users. On the other hand, if the project has defined personas that describe the different user types, research will need to find representatives of each of the different groups.

The number of participants will vary greatly depending on the methodology.

  • Qualitative methods such as usability tests can be effective with just a few participants. In qualitative research, teams aren’t looking to determine how many people are experiencing an issue or to predict trends, but rather to uncover problems and insights; if even a few people share issues, goals, or motivations, it could be worth investigating further.
  • Quantitative methods — can be used when teams are unsure of the scale of an issue. Quantitative research requires many more respondents because it aims to measure numerical data and needs many data points to reach statistical significance. Several online calculators can help determine the necessary number of participants.
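What those calculators compute, for a simple proportion estimate, is roughly the following standard formula; the margin of error, confidence level, and expected proportion are inputs you choose:

```python
# Minimum sample size to estimate a proportion within a margin of error:
# n = z^2 * p * (1 - p) / e^2, assuming a large population.
import math

def sample_size(margin=0.05, z=1.96, p=0.5):
    """z = 1.96 for 95% confidence; p = 0.5 is the conservative worst case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())             # ±5% at 95% confidence -> 385 respondents
print(sample_size(margin=0.03))  # a tighter margin needs far more respondents
```

This is why quantitative studies routinely need hundreds of respondents while a usability test can get by with a handful.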

Finding, screening, and scheduling participants

If the project has an existing user base, research teams need to find ways to reach out to them, such as adding a research panel invite on the site or sending an e-mail asking for volunteers.

However, if the project does not have current customers, then participants will need to be recruited and screened using the developed user persona profiles before deep diving into UX research.

Ensuring quality by running pilots

It sounds simple, but making sure you prepare properly for research can help ensure that you get the most out of your sessions. For example, when performing qualitative research, such as interviews or usability tests, you usually recruit only a small number of participants, so having even one session underperform can affect overall results.

In the case of quantitative studies, consistent execution ensures the reliability of results. Research teams need to make sure that studies are designed well and that the research logistics are smoothed out to get the most out of the sessions. For example, if a question is phrased confusingly, or the prototype is broken, then the research could produce negative responses that reflect those flaws rather than the questions being asked.

The best method to prevent the mentioned issues is to run pilots of the research on a non-participant, such as a colleague, and to iron out biases, problems, and logistics issues before the actual sessions. Spending a small amount of time preparing can ensure valuable research sessions.

Crafting ‘the right’ questions

Planning the questions that you need to ask to get the most out of your research is one of the most vital steps of any kind of user experience research. Here is some advice:

  • Questions need to be neutral and non-leading — for instance, don’t ask participants how much they like the offering, because subconsciously, that question suggests that participants should like it and people may tend to answer more positively than usual. Instead, ask how they feel about it, without mentioning emotionally-linked words. Another way to prevent forming biased questions is to frame them with negative and positive responses. For instance, ask: “Is this feature helpful or unhelpful to you, and why?”, rather than, “How helpful is this feature?”. In short, make sure you don’t lead participants' answers.
  • Questions need to have the appropriate level of precision for people to answer — for example, when you are interested in user attributes, such as level of education or employment, you can use closed questions, meaning you supply a set of answers for participants to select from. Closed questions are also useful when the aim is to gather quantitative data, such as how someone rates the ease of use of a piece of software. Otherwise, it’s best to use open-ended questions, which are by nature more exploratory and allow users to give details and context. It can be harder to analyze the data from open-ended questions, but you get much richer qualitative data.
  • Ask people about things that have happened recently or that they do regularly especially when researching behaviors. For example, one can remember what they ate for breakfast this morning, but they probably can’t remember what they ate for lunch three weekends ago but will feel compelled to make something up. It’s human nature for participants to feel uncomfortable when they don’t know an answer, so aim to give them something that they’re capable of answering. Similarly, people aren’t good at predicting future behaviors but feel compelled to say something that sounds reasonable and makes them look good.
  • Be sensitive to potentially embarrassing or very personal topics — such as finances. People may be reluctant to be candid about some topics, so give participants an easy way to opt out of questions and be careful with your wording. Questions should not seem as though they are passing judgment; try to provide more context for clarity and transparency.
  • Find an appropriate balance between getting detailed information and not overwhelming participants — there’s no one magical session length, but as a rule of thumb, unmoderated research sessions should be short, potentially 5 minutes for a survey, or 20 minutes for a usability test. If a session runs too long, participants are likely to become disengaged and opt out partway through. With directly moderated sessions, aim to cap the session at about an hour.

Asking questions ‘the right way’

When moderating research sessions, teams want to make participants feel comfortable and engaged enough so that they share information.

  • Assure participants — at the beginning of sessions, remind participants that there are no wrong answers, that you value their opinions, and that your job is to uncover insights both good and bad. Be especially careful with the tone and the voice of your instructions and introductions.
  • Ensure that instructions are crystal clear and tasks flow in a way that makes sense to participants — rather than prioritizing the most important area to be assessed first, make the tasks and questions mirror the progression of a process in real life. Participants may provide faulty negative feedback if the flow doesn’t make sense.
  • Break questions into small, discrete tasks — breaking down tasks ensures that participants will consider each component more fully and provide more comprehensive feedback that they may not have thought to share if they were assessing the whole task at once. For example, rather than asking the participants to search for a shirt on an e-commerce website, break that into one task asking to find a shirt in their size, another task for finding their preferred color, and then a task looking for a particular style.
  • When needed, remind participants that you’d like them to think aloud — as they go through the process of interacting with something. It is unnatural for most people, so try giving them a small example. For example, walk them through how you would log into your email account.
  • If a participant gets stuck or asks for guidance, remain neutral and ask how they think they’d figure it out if you weren’t there — if unsure about what the participant said, use the boomerang technique where you reply with a neutral question such as ‘what do you think?’ or ‘what would you try if you were at home?’ If a participant doesn’t ask a question but seems lost, use the echoing technique where you repeat back what they said in the question form. By using the participant’s language, you’re not leading them in any particular way, and replying in a question form makes it clear that the participant should further explain.
  • Follow up with open-ended questions to help participants elaborate — use the 5-whys technique to understand much more about the context that the person is actually in and about how they choose products.
  • Do not interrupt participants — be comfortable with periods of silence. Humans are naturally inclined to fill the silence, so participants will often keep talking and lead you to information that you may not have even known to ask about. Some of the best qualitative insights come from allowing participants to keep talking. Also, listening closely to what participants are saying allows you to come up with follow-up questions that dig deeper or wider than originally planned.
  • There is no need to stick exactly to a test plan — go off-script and uncover information that you would not otherwise have gotten. Have a team member take notes when running moderated sessions so that you can completely focus on listening and crafting deeper questions. Read body language and dig deeper.
  • Focus on taking notes that record the takeaways that relate to the stated goals of the test — if you’re not able to have a note-taker, record sessions so that you can refer back to them later.

Analyzing and presenting findings

Assessing data for insights

Once a research round is complete, teams will need to know how to interpret the data, uncover meaningful and actionable insights, reinvestigate hypotheses, and make recommendations. Each research methodology has particular data analysis procedures, but regardless of the type of research, here are some general tips, and a rough process to ensure that teams get the most out of their UX research data.

  • Gather and organize the data — for example, export the raw data and clean it up for further analysis.
  • Look at the full scope of data before making any conclusions — when analyzing research, make a big spreadsheet with a row for every participant. Include their general demographic information, the notes from each session, and links to any other files or information about the research. Having one big overview helps take a look at the big picture of the possible insights.
  • Break down the large amount of information — mine notes for facts, quotes, or points that relate to the key goals of the research.
  • Use other team members to observe the sessions — including everyone helps ensure that all parties are invested in the process and understand the full breadth of the work, and that no insights are missed. Have the breakdown happen in a debriefing session with as much of the project team as possible. Remind each team member of the key goals and hypotheses of the study, and have everyone mine their notes and write up the main things they observed.
  • Organize and categorize uncovered notes and insights — look at each of the key points you have identified and sort them into the predefined goal categories. If you’re able to do this with the team, take the time to discuss why each finding is important, what it means in the context of the project, and potential solutions or recommendations.
  • Map main takeaways and findings across two dimensions that are important to the project as a high-level summary — for example, 1) impact on users and 2) impact on the business.
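The two-dimension mapping in the last step can be as simple as scoring and sorting. A sketch, where the findings and their scores are entirely illustrative:

```python
# Score each finding on the two project dimensions (impact on users,
# impact on the business) and rank them. Findings and scores are made up.
findings = [
    {"finding": "Checkout button hard to find", "user": 3, "business": 3},
    {"finding": "Footer links outdated", "user": 1, "business": 1},
    {"finding": "Signup form too long", "user": 3, "business": 2},
]
ranked = sorted(findings, key=lambda f: f["user"] + f["business"], reverse=True)
for f in ranked:
    print(f["user"] + f["business"], f["finding"])
```

In practice, the same scores usually feed a 2×2 plot rather than a flat list, but the prioritization logic is the same.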

Presenting and implementing findings

After the analysis of research data, teams need to come up with the key takeaways and suggestions, share insights, and ensure that recommendations are implemented.

  • Find a format that will work for your particular team to share learning — try to include the team in the process of performing and analyzing research, and share knowledge in ongoing discussions. Record key findings in a shared team wiki document and keep some sort of documentation of the main study details.
  • Build and submit a detailed, formal report as a summary deliverable and to document findings — create an executive summary of key information and findings, include a mixture of visuals and text to appeal to the different ways people interpret data, and create a simple spreadsheet of findings and recommendations to give readers a quick way to see the highlighted takeaways.
  • Schedule a whole-team discussion of takeaways and their implications — take at least 30 minutes to go through each of the key insights you uncovered. Discuss what you observed, why it matters to the team, and what the team should do.

When UX research is embraced, conducted effectively, and embedded in the product or service design and development process, teams will understand their users better, designs will be more successful in serving needs, and teams can better hit their business goals.
