Building a framework for prioritizing user research
In September 2020 I joined Pendo.io as the first UX Researcher, with the goal of establishing user research as a separate discipline. I was extremely fortunate that there was already a strong culture of regular customer engagement and a deep understanding of the value of both qualitative and quantitative data. Given that user research activities were already being performed, as the sole user researcher it was important to define what my role would be relative to the rest of the team. Clearly, it wouldn’t be possible for me to conduct all (or even most) of the research. As someone who’s not terribly good at saying no, I needed a framework to help me decide how to focus my time and energy.
This framework went through multiple iterations before arriving at the simple 2x2 matrix. The theme shared across every iteration was the researcher’s role in mitigating risk and bringing clarity to the particularly opaque problems of our industry.
After a year of testing this framework “in the wild” — sharing it with colleagues internally and beyond — I felt confident enough in its merits that I wrote a Productcraft blog post and then later posted the framework to LinkedIn.
As someone who is not particularly active on social media, I was overwhelmed by the volume and depth of responses to the LinkedIn post. The comments indicated that the framework uncovered a shared pain in the community, along with a need for a practical tool to address it. Researchers are wildly under-resourced, and we survive by mastering a fine balance: democratizing our practice while reserving primary research for the most strategic projects. This framework attempts to help navigate that balance.
As much as we might like to be able to approach every project with the highest standard of rigor, the reality is that, in industry, we frequently make resource tradeoffs, and research is no exception. As Marc Hébert elegantly stated, a framework like this is intended to be used as a “conversation starter for what is ‘good enough’ or ‘highly rigorous’ or wherever the felt, imagined and real needs of the team to do the research.”
By way of expressing my gratitude to the many, many individuals who took the time to provide critical feedback, ask thoughtful questions, and confirm the utility of the framework, I wanted to share a summary of the responses below. It is my intention to incorporate this feedback into a revised version of the framework. However, I also want to acknowledge that each consumer of this framework operates in a unique environment, and the specific axes and labels that resonate best may vary from one organization to another. In that spirit, I offer the various alternatives to the community, to use and adapt as each individual sees fit.
Questions, Clarifications, and Suggestions
Several people asked how I define “risk” in the framework. It specifically refers to the risk of getting it wrong. There are many types of risk, but rather than focus on the specific type, I’ve found it to be a better litmus test for resource allocation to ask “what’s the impact if we get it wrong?” This framing helps us to understand how “right” we need to be on our first iteration (because let’s face it, nothing we do in software is ever really “done”).
Nan Wilkenfeld asked the astute question of where “socially-responsible design principles” are positioned with regard to “prioritization decision-making” frameworks. My (perhaps naive) hope was that the risk of getting it wrong would incorporate this particular element. However, Nan wisely points out that, in practice, the interpretation of framework labels is ultimately driven by organizational values. Defining “wrong” is a horribly subjective task that requires additional structure, along with checks and balances at all levels of the organization, to prevent misuse.
Another common thread was around “who” should use the framework, “how,” and “when.” The answer to all of these questions is that it depends on the organization’s culture and resources. In most organizations today, however, the most effective approach to any decision-making is collaborative. The researcher might drive the conversation, navigating the team through the framework, but no one person on the team has the entire perspective. Leveraging the framework to have the conversation among a diverse, multi-disciplinary team will yield the best results, particularly when interpreting risk.
That being said, the framework can be used at any point, or at multiple points, during a project’s lifecycle. As the project evolves, problem clarity and risk may increase or decrease. When major shifts occur the framework can be used to readjust resources. As Sam Satie Salman observed: “So perhaps the differentiating aspect is not in terms of intensity but rather in terms of focus or starting point.”
Several comments noted that there might even be a fairly predictable cycle (or set of cycles) that could be mapped to the framework, and I’ve definitely experienced this to be true. As Aga Szóstek put it: “I think that heavy research and design has a chance to bend direction and focus, while shipping and measuring cements the chosen direction. So, perhaps these should not be dimensions but cycles?” The next iteration will incorporate this feedback!
There were also questions around whether there is ever a time when research “isn’t needed,” or quadrants where research resources wouldn’t be utilized. My sense is that it depends on how you define “research”: as an activity, or as a discipline and role. Research (in varying forms and with varying degrees of rigor) occurs across projects and throughout product lifecycles. The intent behind the framework isn’t to prescribe whether research is conducted or by whom. It’s meant to facilitate evaluating how much rigor is required so that appropriate resources can be allocated. For example, in the “ship it and measure” quadrant, research might be conducted by implementing a new feature or design (not necessarily an A/B test, though it could be) and measuring the outcome against goals.
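For readers who think in code, here is one way to make that intent concrete: a minimal sketch that encodes the 2x2 as a lookup from (risk, problem clarity) to a suggested starting level of rigor. This is purely illustrative; aside from “Research Heavy” and “ship it and measure,” the quadrant labels, their placements, and the suggestions are hypothetical placeholders, not the framework’s actual names.

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

# (risk, problem clarity) -> a suggested starting point for rigor.
# Placements and all labels except "Research Heavy" and "ship it and
# measure" are hypothetical placeholders, not the framework's own names.
QUADRANTS = {
    (Level.HIGH, Level.LOW):  "Research Heavy: invest in rigorous primary research",
    (Level.HIGH, Level.HIGH): "Validate: run targeted evaluative research before committing",
    (Level.LOW,  Level.LOW):  "Explore lightly: lean on democratized, self-serve research",
    (Level.LOW,  Level.HIGH): "Ship it and measure: release, instrument, and compare outcomes to goals",
}

def suggested_rigor(risk: Level, problem_clarity: Level) -> str:
    """A conversation starter, not a verdict, for how much rigor a project needs."""
    return QUADRANTS[(risk, problem_clarity)]

# A high-risk project with little problem clarity warrants the most rigor.
print(suggested_rigor(Level.HIGH, Level.LOW))
```

The lookup table is the point: the framework isn’t an algorithm so much as a shared vocabulary, and a team would revisit these mappings (ideally together) as risk and clarity shift over a project’s lifecycle.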
Many of you commented on the quadrant names, and I will be the first to admit that they are less than ideal. Chris Jackson summed it up perfectly by saying that they are “in danger of being pejorative.” Additionally, Michael Oberle made the great observation that “Research Heavy” (or its equivalent) should be in the top right.
An alternative approach, offered by Chris Jackson and David Munoz, might be to label each quadrant based on what the driver is for that quadrant.
![An updated version of the 2x2 framework that incorporates suggestions from readers for new names for the quadrants.](https://miro.medium.com/v2/resize:fit:700/1*w2qDPdIR7an40_hTUaJIAA.jpeg)
There was also discussion around the “Problem Clarity” axis label. Olwen Puralena suggested “Ambiguity” and Michael Oberle offered “Unknowns” as alternatives.
Rob van den Tillaart further suggested the need for a third axis: Urgency.
While many readers commented on the value of 2x2 frameworks, Bengi Turgan Çifçi made the observation that perhaps it’s misleading for the quadrants to be equal proportions: “For the left hand side where the risk is low, it means that there’s some sort of data-information available at hand. So the lower left quadrant would be smaller than of the right one. (More like a rectangle) Risk is where we lack info. Info is where we have if even blurry, some sight of the problem.”
Other Frameworks
Each of the frameworks below intends to address many of the same challenges I am addressing, through slightly differing approaches. I have not yet had a chance to digest each of these, but I look forward to doing so before revising my own.