Undemocratising User Research
Last week, one of my ex-colleagues from Uber, Eduardo Gomez Ruiz, wrote a thought-provoking piece on Training an Army of (Non) Researchers, aka how to democratise and distribute research skills across an organisation by leveraging research champions. You would do this to give a global workforce faster access to user input and build that feedback into the product and operations decision-making loop.
He made some excellent points on how to do it well. His provocation tempted me to build on his premise and offer a counter-narrative. The intention of this post is to generate more discussion among practitioners and organisations about what type of insight your organisation actually needs and who should deliver it.
I built the first pilot of research with Operations teams, before involving my teams in different variations of the original idea to see if we could create the perfect format for democratising research. The intention was to enable other teams to become more confident in listening to users, and to do this regionally, since a centralised research team could never solve all the local issues that existed at city level.
Long story short, we did not succeed in making insight generation a fully democratic process. It did not live up to expectations, not because people did not think through the problem or give it their best shot. It failed because, no matter how much you plan and organise, high-quality insights require much more than a faster insight-gathering process.
So what were the problems with our efforts to democratise research?
It took more time and effort than the value it delivered to the organisation.
We spent about 1.5 years actively tweaking our Research Champions program, pairing different champions with different types of qualitative research in terms of method, scale and complexity. We were able to build a solid training module to expose non-researchers to the why and how of research.
However, it became clear very early on that while a lot of people in today's interrelated job families, like Marketing, Customer Operations, Design and Product Management, can 'talk' to people, not many of them had the appetite to adapt to the craft of research: creating hypotheses, writing the research guide or thinking through an analysis framework.
A researcher is a dynamic thinker who has to adapt their methods and questions based on who is in front of them, how much they have already learnt and what new areas could be probed. This did not happen. We got a lot of verbatims and videos which, after a point, became repetitive and did not add anything to the analysis. This then led to analysis paralysis. While it was exciting to be able to research in 6–10 different cities in a week and get feedback, in almost every case our teams were unable to analyse all the data effectively, or they left out large chunks of it just to keep the final analysis, synthesis and storytelling manageable.
In several cases, the volume of data added weeks to our deadlines, or people worked overtime to close it out. In the end, two things happened: the ambitious burnt out, and the less ambitious lowered the quality of their output or just gave up. Neither is an outcome you would want in your organisation on a regular basis.
After a few initial pilots, I ended up questioning the scale of what researchers were proposing to do with non-researchers and focused their efforts towards sound-checking simple things, like testing our user flows in different markets. No foundational insights that would lead to genuinely new thinking came from this exercise.
As companies globalise, the urge to 'hear more' from users is tempting, but without the right amount of time and thinking behind it, unleashing a crowd of people to go talk to users is not the answer. More data does not equal better data. You hear the same things again and again. This is why crowdsourcing platforms for innovation have never fully taken off, and why true innovation management remains the work of smaller groups who can deeply engage with a problem space.
The majority of the insights drove short-term fixes rather than game-changing ideas.
If I reflect on which types of research conducted with our cross-functional partners were actually successful, it was almost always the tactical ones. Whether it was a project analysing data on churned users or changing something in a benefit scheme, the more tailored the question was to the non-researcher's core skill set, the better the analysis and the final output. Fixing short-term things is a goal in itself, and when we think of democratising research, we must acknowledge this upfront.
Every single one of the non-researchers was an expert in their own area, and it is natural that they would do well with topics that aligned with their prior knowledge. Combined with the limited amount of time they had to explore their research interest, it was normal for people to fall back on their strengths rather than unlearn and then relearn the actual skills of a researcher. So what we really got was not a new pool of researchers but access to skill sets we would normally not have had if we had gone through the typical prioritisation route for their time and skills.
What the attempted democratisation process really showed us is that, as companies, we are becoming so specialised in incremental growth hacking that getting a diverse group of people's time and brain space to think about an ambiguous problem is almost impossible, and is seen as a wasteful investment of time. How this impacts a company's ability to stay innovative in the long run is something we all need to think about.
It distorted the type of research we did.
Once we shared publicly that we had built a framework for research with non-researchers, the idea seemed so attractive to most of our stakeholders that they constantly asked us to do more of it. The volume of testing requests skyrocketed, and they were almost always tied to a launch P0. As a research manager, I tried to keep 30% of our team's time for foundational questions, but actually doing that was hard, if not impossible at times. The hard lesson I learnt at this juncture was that democratising research can very quickly get us to a state of mind where insights are seen as available on demand and researchers as individuals who can be rotated from project to project. This is not good for the future of our discipline. The whole point of UX Research is to develop deep thinking rather than fast or wide thinking. In today's world, we have other ways to do fast and wide: look at data, for instance, or scrape social media.
I have learnt that trying to be a good partner to Product and aligning on shipping priorities is not how research should define its own success. If testing the quality of what you ship is important, then we need to level up our quality processes. Research could have a bigger role in shaping what great Quality Assurance looks like, but we should not become synonymous with it. This is the risk we run when we raise armies of non-researchers to support a burgeoning need for research.
Is democratising research a bad idea?
I will go out on a limb and say yes, democratising research is largely a bad idea. Just as growth hacking is the death of true, meaningful product thinking, trying to democratise research with non-researchers is the end of meaningful, high-quality, direction-changing understanding of user behaviour. It may be somewhat helpful when you are a tiny startup, but no sizeable organisation should over-scale its research intent.
By enabling others to do ad-hoc 'research', we make our discipline cheap. When we agree to train others in research in the interest of being good partners to Product or Design, very little good comes of it for our discipline in the long run. I think we have some learning to do here from Engineering, where specialised knowledge is valued for what it is.
What we really need to do at this moment as UX Researchers is to split research into two clear tracks: Opportunity Discovery and Testing. As a UX Research Lead, I would like 80% of my team's time to go towards discovering the right opportunities. This will require UX Researchers to enhance certain skills: doing more literature reviews, getting better at identifying and tracking trends over time, and improving their storytelling and quantitative market-sizing skills in order to own the full narrative around an opportunity.
It will also move us away from usability testing. In a world where, compared to the 80s and 90s, we have stable, sophisticated design systems and mature usability best practices, we need to rethink the what and how of testing and find more suitable tools or teams to do it.
This is where I think UX Research has an opportunity to work more closely with Customer Operations, Quality and Localization teams. We can define how to test products not just for bugs in the code but also for levels of usability. A mature product organisation needs to establish some common usability parameters. Once a core usability framework has been created, it can be leveraged in different geographies, across different launches, at whatever granularity and frequency an organisation wants.
By splitting UX Research off from the scalability question, we help organisations become more principled in decision-making while creating a more meaningful work environment for researchers.
First, we make everyone in an organisation responsible for making good decisions by leveraging collective experience more effectively. By pushing teams to use their common sense, design patterns, well-documented best practices and domain knowledge, we unlock the creativity of the group rather than over-index on any one function. We need to encourage teams to synthesise rather than reinvent the wheel each time. Great synthesis across functions can only happen when the amount of data is meaningful and manageable. Too much data is, again, not great data, and it is definitely not easy to work with.
Secondly, by focusing on fewer but more critical challenges, we help organisations scale research internally in a meaningful way. This way we are not hiring talent upon talent and putting them to work on microscopic issues that do no justice to their degrees, skills and experience.
Lastly, by both reducing volume and prioritising the right questions, we help researchers gain back some control over the type of research they should be doing. We encourage them to think and give them the time and space to do so. This is why they were hired in the first place.
So, will I try democratising research again?
Yes, it's worth attempting if the focus is on using well-established heuristics to sound-check day-to-day ideas. Yes, if I work in a small startup where dedicated research is not an option.
No, if the aim is to get better at identifying long-term opportunities. No, if the plan is to truly understand unique user behaviours and create product differentiation.
Other teams are better at scale than UX Research. We are better at meaning, relevance and forward thinking. Let's do more of that.
Thank you:
Molly for your amazing critique.
Eduardo for sharing your thoughts so I could build off them.
Photo by Sasha Freemind on Unsplash