How to Do UX Research When There's No Time or Resources

High-impact & low-cost user research as the sole UXer

Richard Yang (@richard.ux)
UX Collective

--

Cover image: a UXR process diagram, from research to define to ideate. Image by Denis Bolshakov on behalf of FlowMapp on Dribbble.

It can be daunting at times when joining a new project as the sole designer on the team. Who’s going to handle all the design and research? You.

To make matters worse, most companies don't have a dedicated user research team or an established research process.

Flowchart diagram of the design process from a non-designer’s perspective. Design > code > ??? > profit.

You may find yourself sitting in a kick-off meeting watching someone give a passionate talk about “exponential growth” and “projected profits”… whatever those are.

So it shouldn’t be surprising when stakeholders remark, “Oh, we don’t do user research here. We don’t have the resources for it — but you’re free to conduct some on your own time if you want.”

Given the limited resources at your disposal, it’s important to have a toolkit of high-impact user research methods that cost less and take less time compared to traditional alternatives.

In the previous role I held, I was the sole designer owning the entire experience for a logo maker product called Hatchful. Since we didn’t have a user researcher to support our UX decisions, I had to figure out a low-cost, high-impact solution that worked for the team.

Tom Hanks holding skates looking confused with a toothache.

Without user research, designers must make important design decisions based on intuition and assumptions alone. This is not ideal as waiting until release to get feedback could waste resources, lose trust, and lead to irreversible mistakes.

Arin wrote a great article about the importance of user research, but this diagram from my old ergonomic seminars sums it up quite well:

Diagram showing proactive and reactive validation and how it impacts costs.

User research breaks down into five steps; this article covers the first three, and steps 4 & 5 will be covered in a future article.

  1. Figure out the research question we want to be answered
  2. Figure out which research method best answers the research question
  3. Recruit participants
  4. Conduct research
  5. Examine findings & create actionable insights for the team

A common mistake I’ve seen is people rushing headfirst into user research without establishing a clear research goal first.

The research goal should aim to answer a specific question and test a specific hypothesis. A good structure for this is the if-then statement: "If [independent variable], then [result (dependent variable)], because [rationale]."

For example: "If we change the CTA button color on the landing page from white to blue, then we'll see higher click-through rates, because the button will be more prominent."

Good research questions

  • Which landing page design has a lower bounce rate, and why?
  • Who are our users? What are their needs and pain points?
  • Why are users getting confused at this step in the user flow?

Bad research questions

  • Who are our users? Why aren’t they giving us money and getting confused in steps 3, 8, and 15?
  • How can we sprinkle some magic UX dust and make more money?

I would suggest all designers expand their UX toolkit beyond the standard user testing methods.

Start doing research on important user-focused questions like, "What features do our users want?", "How do our users group content together in their minds?", or even, "Who the heck are our users?" (plot twist: it's not us).

I can’t count the number of times I’ve seen designers download a persona template, fill it in with random assumptions, and be done with it all.

A persona image from Dribbble.
David looks like a workaholic because he wears a suit. Source: me

Once the team aligns on a research question worth answering, we’ll need to figure out which user research tool is best suited for our needs.

Picking the right UXR method

A diagram of UXR methods based on the various attributes of each.
Yes, there are a LOT of them.

When breaking down the methods, it’s important to understand that user research can be either generative or evaluative.

Generative methods explore a problem space and learn about potential users.

Evaluative methods validate design decisions and measure impact.

Avoid jumping the gun and using evaluative research methods to test a new solution. Attempt to conduct generative research first to understand who the users are and their goals.

Depending on the type of research you conduct, I would suggest collecting qualitative data (e.g. opinions), quantitative data (e.g. time spent on task), or a combination of both. When conducting research, it's important to avoid unconscious biases, understand the user's mental models, and remember that research is only valuable when shared and converted into actionable steps.

For execution, we can opt for remote moderated research, remote unmoderated research, or on-site moderated research.

A good rule of thumb is to use unmoderated research methods for evaluative testing (e.g. “does design A make sense” or “is design B better”).

I’ve put together a table to match research questions with their corresponding methods:

Table of UXR methods and when to use them.

Don't be afraid to try out new research methods; you can't usability test every scenario.

UXR method examples

Interviews

There are three types of user interviews: (1) directed interviews, (2) non-directed interviews, and (3) ethnographic interviews.

In the interest of time, I recommend interviewing participants through a remote tool. However, if there are participants representative of the target user group within a reasonable radius, in-person is preferred.

You don't need to create a full script, but it's imperative to be prepared with both direct and indirect questions that will provide insight into the team's research goal.

I’ve found transcribing interviews into long unreadable reports to be a waste of time. I recommend listing out the main points from your interviews and adding tick marks next to the points that come up often.
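The tick-mark tally described above is easy to sketch in code. Here's a minimal example (the note strings are hypothetical) using Python's `collections.Counter` to surface the points that come up most often across sessions:

```python
from collections import Counter

# Hypothetical interview notes: one entry per point raised,
# re-tagged with a short, consistent theme label during review
session_points = [
    "confused by export button",
    "wants template search",
    "confused by export button",
    "wants template search",
    "confused by export button",
]

tally = Counter(session_points)

# Print points sorted by frequency, most common first,
# with "tick marks" for a quick visual scan
for point, count in tally.most_common():
    print(f"{point}: {'|' * count} ({count})")
```

The same tally can then feed the overview of overarching themes shared with the rest of the team.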

Afterward, I like to provide an overview of overarching themes that came out of the research so other team members can get an overall sense of the sessions without being present.

Card sorting

This research method involves having users sort a set of terms into categories to reveal their mental models. It's often used to validate, or generate initial ideas for, a product's information architecture (IA).

Card sorting can be "open" or "closed". The findings provide valuable insight into how users perceive the relationships between content, which can inform hierarchical design decisions.

It’s less time-consuming to conduct open card sorting in the same interview session. In general, you’d want to aim for 15 participants and around 40 cards. Here’s a great resource to get started.
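When analyzing open card-sort results, a simple co-occurrence count shows which cards participants tend to group together. Here's a minimal sketch (the card names and groupings are made up) that tallies how many participants placed each pair of cards in the same group:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical open card-sort results: each participant's groupings of cards
sorts = [
    [{"pricing", "billing"}, {"templates", "icons", "fonts"}],
    [{"pricing", "billing", "templates"}, {"icons", "fonts"}],
    [{"pricing", "billing"}, {"templates", "fonts"}, {"icons"}],
]

# Count how often each pair of cards lands in the same group
co_occurrence = defaultdict(int)
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

# Pairs grouped together by most participants suggest IA groupings
for pair, count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: {count}/{len(sorts)} participants")
```

Dedicated card-sorting tools produce similarity matrices like this automatically, but the underlying idea is the same.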

User testing

This is the most common form of user research. It involves asking the participant to work through a series of guided tasks while "thinking aloud".

A good prompt I use to help with this is “imagine your brain is on speakerphone”.

These tests are great because they produce specific results that lead to actionable changes.

If the team is having issues with stakeholder alignment on a particular design decision, inviting them to a session can increase their enthusiasm and help them understand the value of research.

There are three kinds of user testing: moderated, unmoderated, and guerrilla testing. Unmoderated tests are the method I most often use; www.usertesting.com has been the best resource. It’s helpful to have these tests run in the background while I focus on other design duties. After the tests are done I can compile certain snippets into a highlight reel to share with stakeholders & team members.

A test protocol consists of screener questions, user tasks, and instructions, all of which can be put together in under an hour and reused for future tests.

Comparison of various UXR software.
Source: Paul Veugen

Guerrilla testing is often conducted when the team needs quick validation or some initial ideas. This lightweight method involves a 5–10 minute unscripted walkthrough of a design with a coworker, friend, or stranger.

Testing low-fi wireframes might work for some, but in my experience participants get confused and end up asking questions like, "why is this in black and white", "I don't like how this looks", or "how did you get into my house".

It’s worth spending 30 minutes making the design a little more presentable with real content (don’t even get me started on testing designs with lorem ipsum).

A/B testing

A/B testing involves testing two different designs with two sets of users (around five each, as this uncovers most of the UX problems; further participants yield diminishing returns).

This is best used when you want to make sure a new design performs better than the existing design (the "control"). Be careful not to A/B test at the beginning of the process, as the team could end up stuck climbing toward a local maximum.

Diagram of local maxima.
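When an A/B test runs live with real traffic (as with the CTA-color hypothesis earlier), a two-proportion z-test is one common way to check whether a difference in click-through rate is likely real rather than noise. A minimal sketch with made-up numbers, using only the standard library:

```python
from math import sqrt, erfc

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test: is variant B's click-through rate different from A's?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled click-through rate under the null hypothesis (no difference)
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return z, p_value

# Hypothetical numbers: 120/1000 clicks on the white CTA vs 150/1000 on the blue one
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests a real difference
```

A conventional threshold is p < 0.05, but the right bar depends on how reversible the decision is and how much traffic you can afford to spend on the test.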

Other user research approaches might be less practical given their inherent time and cost demands, but that doesn't mean we shouldn't experiment with them; feasibility depends on the details of the particular project.

You can also get creative when attempting to answer specific niche research questions. Just because a particular UX method isn’t well documented doesn’t mean it's not viable (assuming we avoid the common bias pitfalls).

Back when motion-tracking software was expensive, I ended up projecting a website design onto a wall and having the participant wear a hat with a laser pointer strapped to it. I didn't learn this method from any book, but it was exactly what we needed to align on a decision at the time.

Recruiting participants

Once we have our research question and research method, we’ll need participants to work with.

The number of participants varies depending on the method being used. For instance, user testing can be done with just five users per test, while card sorting requires many more participants, up to 20–50.

Assuming the team doesn’t have a dedicated user research team to help with recruitment, here are a few low-cost approaches:

Method 1: Recruit visitors from the product’s web properties

Tools like Qualaroo and Intercom allow us to show visitors a screening questionnaire and collect emails from interested participants.

It's important to have a quick call with prospective participants and to confirm the details of the test with a calendar invite. Doing this verifies that participants can articulate their thoughts out loud, and also reduces the chance of a no-show.

Meme from Office Space “I was told there would be users to test”.

Method 2: Recruit from a subset of the mailing list

You can also email a subset of the mailing list to find test participants. Sometimes I respond to support tickets and emails, asking if the user who logged the complaint would like to participate in some research related to their problem.

Method 3: Leverage the support staff

Your customer support team interacts with users the most, so support team members can make excellent candidates for "user interviews".

I've found that support team members provide excellent insight into actual user problems and pain points, drawn from their extensive experience interacting with users.

How often to test & how representative do participants need to be

How frequently we should perform user research and testing varies depending on the project and its constraints. In his book Rocket Surgery Made Easy, Steve Krug recommends testing at least once a month, with all stakeholders involved for an entire morning.

That’s a great rule of thumb, but there are also additional factors when deciding whether or not something should be tested.

A heuristic I use weighs: how reversible the decision is, the degree of impact, the number of users affected, the mitigations in place if something goes wrong, and the consequences of delaying a release.

However, when running a design sprint or working on a new project from scratch, a good guideline to follow is: “test & iterate more often at lower fidelity with more general participants, and test & iterate less often at higher fidelity with more representative participants”.

When the teams are still focused on quick low-fi iterations, most of the problems can be identified with a more general participant pool.

On the flip side, when moving towards hi-fi iteration work — the remaining problems tend to be identified when testing more representative users who might have specific domain knowledge or real-life use cases.

Diagram illustrating how often we should test & with how representative our users should be.

After conducting user research and receiving the results and findings, we need to communicate them to the team in order to create actionable tasks to improve the product.

If nothing gets done and nobody understands the findings, then research is useless. Here’s a great article about tailoring your findings for different audiences.

https://ambitiousdesigner.substack.com/

Ready to level up your design skills and reach your full potential? Subscribe to The Ambitious Designer newsletter for weekly doses of UX insights, frameworks, and practical career advice.
