Rolling Research: a minimalist UX research process
A practical guide to using a lightweight recurring research process.
Rolling research is a lightweight process for conducting frequent, regular research on a common theme. It’s quick ’n’ dirty. And it works (with some limitations, more below). I developed this process with a Product Manager and a Product Designer when we embarked on developing a major new feature called learning paths for our B2B product. (Learning paths is a feature that allows our users to build curated, ordered lists of learning content for their teams.)
We had an ambitious (aka crazy tight) deadline for the launch of this new feature. We needed to figure out how to test our prototypes with customers, iterate on their feedback and test the next version of our prototype (and rinse and repeat) as quickly and as frequently as possible.
Those of you experienced in user research, especially with B2B customers, know it takes a lot of time to plan, execute, analyse, synthesise, write up and share research. Recruiting customers for B2B research alone can take weeks! And we were aiming to run moderated user tests with customers on a revised prototype every week. Holy sweet potato!
So we had to strip our research process back to its bare bones. Think of this as minimalist research. We jettisoned any part of the process that wasn’t absolutely essential. We adapted other parts to make them more lightweight and flexible. Throughout, we learnt from our mistakes, adapting and improving the process as we went. In the end, our new product feature launched successfully, and we continued to use Rolling Research for evaluative testing post-launch.
Our minimalist research process is a mash-up of conventional moderated user testing, semi-structured interviews and the RITE (Rapid Iterative Testing and Evaluation) method. We cherry-picked the tools and techniques from these methods that best met our needs. If you’re not familiar with these methods (or need to refresh your memory), you may find these resources helpful:
- Moderated user testing: Usability testing: what is it and how to do it? (a short article by Leonel Foggia)
- Semi-structured interviews: The 3 Types of User Interviews: Structured, Semi-Structured, and Unstructured (a short instructional video by the Nielsen Norman Group)
- RITE Method: When RITE is right (a short article by Sara Mansell)
Here’s what we learnt along the way.
Rolling Research Tools
We stripped back our research toolbox to two documents that we used throughout this process:
- The Test Script: We created a template script that we used in every round of research. For each round we made a copy of the template and adapted it to reflect that round’s focus; then, for each individual test, we made a copy of the round’s script and took our test notes directly in it (a rough sketch of both documents follows this list). The script contained:
- Date of the interview
- Name of the participant, their organisation and job title
- Links to references: the Findings Spreadsheet (see below) and the test recording
- A summary of the background of the research and the research questions (because we didn’t have any other documentation where this was recorded)
- The script questions grouped by theme. From week to week we opted to include or exclude certain questions under each theme, but we always asked something under every theme.
- The Findings Spreadsheet: This was a Google Sheets spreadsheet where we recorded the findings of each test. It became our record of the research findings. When we ran an Observation Room, where other members of the team observed the tests, we filled out the Findings Spreadsheet together. For each test we recorded:
- The participant’s name (linking to our Test Script with our notes from that test)
- The date of the test
- The participant’s organisation (with details such as sector, size and location)
- Further details about the participant’s role with our product
- A link to the test recording
- A column for each of the script’s themes, where we recorded the participant’s responses on that theme
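To make these two templates concrete, here’s a minimal sketch in Python of how they might be laid out. This is an illustration only, not our actual tooling (we maintained both documents by hand in Google Docs and Sheets), and the theme names and file names are hypothetical placeholders:

```python
import csv
from datetime import date

# Hypothetical themes: substitute the fixed set of themes your research uses.
THEMES = ["Creating a path", "Ordering content", "Assigning to a team"]

# Findings Spreadsheet: fixed participant columns plus one column per theme.
columns = [
    "Participant (link to test notes)",
    "Date of test",
    "Organisation (sector, size, location)",
    "Participant's role with our product",
    "Link to recording",
] + THEMES

with open("findings_spreadsheet.csv", "w", newline="") as f:
    csv.writer(f).writerow(columns)

# Test Script: a per-test copy of the round's template, with a section per theme.
script_lines = [
    f"Date of interview: {date.today():%Y-%m-%d}",
    "Participant, organisation and job title:",
    "Links: Findings Spreadsheet / test recording",
    "Background and research questions:",
    "",
]
for theme in THEMES:
    script_lines += [f"## {theme}", "- (this round's questions under this theme)", ""]

with open("test_script.md", "w") as f:
    f.write("\n".join(script_lines))
```

The point is just the shape: a fixed set of participant metadata plus one slot per theme, repeated unchanged from round to round so that findings line up for synthesis.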
We used Gmail, Google Calendar and Calendly to invite and schedule participants, and UserTesting.com or Google Hangouts (with QuickTime Player capturing the sessions) to record our tests.
Features of Rolling Research
To keep this research process as lightweight as possible, we:
- Kept the research focused on a fixed set of themes across all rounds. The emphasis sometimes shifted from round to round, but the broad themes stayed the same. This also allowed us to synthesise our findings across rounds.
- Kept to a fixed schedule: We aimed to run research every Friday. That meant we could recruit customers and book rooms and other resources potentially weeks in advance. We didn’t know exactly *what* we would be researching very far in advance; we just knew that we would be researching *something* every Friday. You may need to experiment to figure out what day works best for you and your customers.
- (Nearly) all research work happens on research day: Each Friday morning we set the goals of that round and adapted our script to reflect them. In the afternoon we ran the tests, followed by debriefs in the Observation Room. Later on Friday afternoon (or in the final debrief of the day) we synthesised the findings and agreed the next steps. There was a little recruitment and scheduling work on the days in between (e.g. sending invitations and reminders to participants, handling late cancellations), but the vast majority of the research work happened on the research day. (Of course, a huge amount of design work happened between research days, as the findings of the previous round were baked into a new prototype in time for the next.)
- Stuck mostly with one method throughout: moderated user testing with some semi-structured interview questions. We did dabble with other methods, but this was less efficient because of the time required to adapt the documentation and other aspects of the test setup.
- Stripped back the documentation to a bare minimum:
- We used the same test script (see above) throughout (hence the importance of sticking to the same set of themes), adapting it a little for each round.
- We recorded the findings, analysis and synthesis of all rounds in a single spreadsheet: our imaginatively named Findings Spreadsheet (see above).
- There was no documentation of the research beyond the test notes and the Findings Spreadsheet: no reports, no presentations.
- Used Observation Rooms:
- These were rooms, separate from where the tests took place, from which other members of the team could observe the tests remotely but in real time.
- We created an Observation Room Guide that set out the observers’ role and observation room etiquette. The guide included:
- About this research: A brief description of the research and its goals
- Why You Are Here: A description of the observer’s role
- Rules of the Observation Room: e.g. don’t talk or distract other observers during the tests; follow the ‘rule of two’ (commit to observing at least two tests)
- How the Observation Room Works: A brief description of what happens when
- Observers learnt from the participants’ feedback in real time. Immediately after each test we ran a debrief session in the Observation Room, where together we documented the findings from that test in the Findings Spreadsheet.
- While Observation Rooms are not unique to our Rolling Research process, they performed a distinct function within it: sharing the findings with our wider team immediately (remember, we had no other reports or presentations of the research).
Strengths of Rolling Research
- Lightweight, streamlined
- Adaptable (somewhat — see limitations below)
- Facilitated very frequent contact with our customers, which gave us confidence that our designs were meeting their needs.
- Made recruiting our B2B customers easier: we effectively had a much longer recruitment lead time, since we could recruit weeks in advance to fill our Friday research slots.
- The Observation Room promoted inclusiveness and developed a shared understanding of our customers’ needs. This is a great way to actively involve the wider team in research.
Limitations of Rolling Research
- Very demanding of time and resources, and sometimes exhausting, especially at a very frequent cadence like weekly. Unless you have a pressing need, I would recommend experimenting with longer cadences, e.g. every other week (which we switched to later in our feature’s development).
- Requires a team of enthusiastic and experienced proponents of UX research. Thankfully, I was working with a designer and a product manager who were both. This is not the method to use to train UX research rookies.
- Limited adaptability: while we adapted the focus of the script from round to round, we didn’t change the method, the themes or the product we were researching. To do so would have required so much effort as to defeat the advantages of lightweight research.
- If you plan to synthesise your findings between rounds, don’t leave it too long: you’ll simply forget the nuance and detail of the tests. Plan in advance when you’ll synthesise between rounds.
- Very lightweight documentation means it’s trickier to share the research beyond those immediately involved in it. Interview notes and the Findings Spreadsheet are invaluable to this process, but they aren’t ‘pretty’ or immediately engaging.
- Recruitment is always tough, especially for niche participants. In some rounds we simply couldn’t recruit enough customers, ending up with one or even no participants on some research days.
- We limited ourselves to a maximum of three interviews per day, which is fewer than you would schedule in a ‘full-sized’ moderated user test project. This was due to time (and stamina!) and recruitment constraints.
- Though I haven’t tried it, I don’t think this process would be particularly useful for purely generative research. I believe that it’s best suited to evaluative methods like moderated user testing where the findings are going to be used to make immediate decisions about the product and the impact of those decisions needs to be subsequently evaluated. This is my opinion, your mileage may vary.
Other Versions of Rolling Research
While researching how other researchers and organisations have approached challenges like those that inspired our Rolling Research, I found it interesting that similar processes have evolved elsewhere, right down to the name. For example:
- The article Rolling Research: Keep the Insights Coming, by Akilah Bledsoe (Research Program Manager at Facebook), Jamie Kimmel (Researcher at Facebook) and Beth Lingard (Research Manager at Facebook), describes a very similar process to the one I’ve described here.
- AnswerLab proposed a similar approach in their article, Making the Case for Rolling Research.
- Ben Wiedmaier, in his article Rolling Research: Your Key to Creating a Culture that Incorporates User Insights, describes the ‘always on’ research programmes at GitHub and Facebook (the latter being the same programme described above).
Similarities between these processes and our Rolling Research process:
- Set up a regular cadence of research that facilitates planning and recruiting
- Used similar questions, deliverables and strategies from round to round to save on the overhead of developing these afresh for each round.
- Used lightweight templates for research artefacts both to save researcher time creating these from scratch and to facilitate non-researchers in scripting their research questions and taking greater ownership of research.
- Involved stakeholders in the research to mitigate the risk of nuance being lost in the more lightweight reporting
- Acknowledged that this is a ‘scrappy’ process. It is suited to some applications but not to all. It is not a complete substitute for more conventional research methods.
- Acknowledged that this process still required significant research ops overhead to sustain it.
The main differences from our process are:
- Used for regular, ongoing research (not to meet a specific, time-bounded goal, as we did with our product feature launch)
- Changed the product and/or the topic of the research from round to round (we focused on just our product feature as it evolved). Because of this, they focus on general participants rather than niche participants, whereas we focused on the niche participants at whom our new feature was targeted.
- Used for purely evaluative methods (while we did include some generative questions in our tests)
Have you evolved similar processes in your team or organisation? What have you learnt along the way?