Scaling research at Dropbox

Responsible democratization: how the Design Research team at Dropbox empowers cross-functional partners to conduct competent research

Christopher Nash
UX Collective

A series of 7 pieces of popcorn, arranged from the smallest unpopped kernel to the largest, fully popped kernel.
Photo by Kelly Arce

A brief history of Design Research at Dropbox

When I joined Dropbox in the summer of 2014, the research team was a couple of months old, and there were just four of us. We spent the first few months helping product teams learn what research is and how it could help them make better decisions. We looked for or created our own research projects wherever we saw opportunities for impact.

As word got out and we started to prove our worth, teams began asking for research, and we worked in a client-services model for a time. Teams would request our assistance, and we would evaluate and prioritize. We assigned a researcher to projects that showed the most promise for research impact.

As soon as the team was big enough, we began to specialize, embedding researchers in product areas and product teams. The embedded model came with significant benefits. Researchers built ongoing relationships with their teams, developing mutual trust and respect, and they were able to develop deep domain knowledge. We didn’t have to burn time ramping up on a new product area for every project. Being embedded with teams also meant that we were in the room for roadmapping, planning, and strategy conversations. We were able to advocate for users on a daily basis and reiterate the lessons we’d learned from previous research.

As product teams learned the value of research, they began to rely on researchers more and more to inform product decisions and, eventually, for innovation and strategy. Now they wanted to put everything in front of users. Within three years of our start at Dropbox, product teams felt they couldn’t function without us. The demand for both foundational and evaluative research was outstripping the capacity of our team.

Today, we have 28 researchers and five research operations heroes. (Yes 👏 they 👏 are 👏 heroes 👏! We could not function as effectively without their support and expertise.) Of course, the company has grown as well — and along with it, the number and breadth of products and features that are in development at any given time. So, despite the growth of our team, the need for research is still greater than we can reasonably conduct on our own.

We needed to scale

How might we increase the capacity of Design Research without a huge increase in headcount? Democratization of research was a possibility, but would that be safe? More research could do more harm than good, if it were conducted improperly or used irresponsibly. Among the questions we asked ourselves:

  • Would untrained researchers ask leading questions or misinterpret data, leading to overconfidence in bad decisions?
  • How could we support more people doing research without overloading our Research Operations team with a ton of participant recruiting requests?
  • If anybody can do research, then would we put ourselves out of a job?
  • What types of research should we allow untrained researchers to do, and what types should be reserved for professionals?

Democratization was happening with or without us

Our cross-functional partners were already doing research without us. We’d been facilitating self-serve research through our Real World Wednesday program (more on that below), and we’d been experimenting with allowing access to UserTesting.com. Our Research Operations team was also getting participant recruiting requests from outside the research team. We didn’t want to discourage folks from learning directly from users, but we needed better visibility, and some safeguards, to ensure that this research was being done responsibly.

Not all research is appropriate for a democratized practice

Researchers spend years developing their craft, and we don’t believe that just anyone can interview a few users and come away with reliable insights. One easy safeguard was to empower our partners to conduct only certain kinds of research. For example, we didn’t want to put them in charge of foundational research or longitudinal studies (though we do love to include them as collaborators and observers!). Large datasets require the expertise of trained researchers for effective analysis.

Evaluative and iterative research is where we saw the most promise for democratization. Sample sizes are usually fewer than 10, and sessions can be short, resulting in a more manageable dataset. With a little guidance from an experienced researcher, many forms of evaluative research can safely be led by folks who are new to the field. By empowering our partners to conduct these studies, we are providing them with opportunities to learn some research techniques, increase empathy for users, and answer some of their immediate product questions — all with minimal risk.

A three-pronged approach

There is no silver bullet for safely and effectively scaling the practice of research, so we’ve settled on a three-pronged approach:

  • Facilitate unmoderated remote research (UserTesting)
  • Facilitate low-risk moderated research, while minimizing overhead for the Research Operations team (Real World Wednesday)
  • Give non-researchers the hands-on support they need to understand what is possible and to maintain a high quality bar (my own internal consulting practice)

Unmoderated remote research: Not all research requires an hour-long, in-depth conversation. Simply hearing a user talk aloud while they interact with a prototype can uncover a lot of usability and language problems, and even provide some lightweight insight into user workflows.

At Dropbox, we rely on UserTesting, but there are plenty of other vendors out there, including UserZoom and Lookback. With UserTesting, you link to a prototype or mocks that you want feedback on, along with a series of questions you want participants to answer. You can screen participants from the UserTesting panel, or direct your own users to the platform. Participants record themselves completing your test. You’ll receive a series of videos of them interacting with your test material and answering questions out loud.

UserTesting is a good choice for democratizing because:

  • It’s easy: the mechanics of setting up a test are straightforward
  • It’s fast: after launching a test, we typically have videos ready for analysis within an hour or two
  • It’s remote: we can reach participants within a wider geography, outside of the urban centers where Dropbox has its offices
  • Recruiting is a breeze: if you are able to leverage their user panel, then your recruiting team doesn’t have to be involved at all

Real World Wednesday is an ongoing program at Dropbox that is a little like research speed dating. Every other Wednesday we invite a group of five users to participate. We have five teams of Dropboxers who have designs or concepts to test. Each team gets 15 minutes with each participant, so over the course of an hour and a half (five 15-minute sessions, plus transitions), a team gets useful feedback that can help inform decisions or steer additional research. (Huge shout-out to my colleague Marian Oman, who has been a driver of Real World Wednesday for the last two years.) See our previous blog post about Real World Wednesday to learn more about the program.

Real World Wednesday is a key part of our democratization program because:

  • It gives non-researchers an opportunity to connect with users directly and build empathy.
  • Moderated tests allow for deeper conversations and richer data.
  • It’s a low-risk way for newbies to try out moderating.
  • The rolling nature of the program minimizes the lift for recruiters and researchers alike. Because it happens frequently, on a regular schedule, the program works like a well-oiled machine.

Hands-on support: We wanted to enable our partners who were doing their own research, and to ensure that the research was being done responsibly, but we were not comfortable simply providing the tools and letting our partners loose to do whatever. So, together with Aruna Balakrishnan (who was my manager at the time), I developed a new role for myself as an in-house consultant, focused nearly full time on raising the quality bar for research that is conducted by our cross-functional partners.

This hands-on consulting is crucial because:

  • Unsupervised research by untrained researchers carries too high a risk of producing low-integrity insights.
  • Untrained researchers aren’t familiar enough with methods and techniques to know what is possible.
  • Untrained researchers tend to frame their questions in terms of business need. When a researcher frames a question in terms of user need, that alone can shift the focus of the entire project.
  • Product teams have more confidence in self-serve insights when they know that a trained researcher was involved.

Internal research consulting

Unlike most of my research colleagues at Dropbox, I’m not embedded in a product team. I spend my days consulting with cross-functional partners about their research activities, and creating and maintaining resources to support their work.

Supporting resources

I’ve developed a support site for each of our democratization programs: Real World Wednesday and UserTesting.

Each site covers topics such as:

  • Logistics: how to sign up to participate in Real World Wednesday or get a license for UserTesting
  • Templates: document templates for kick-starting a research plan, note-taking, and writing up findings
  • Privacy and security: how participant NDAs work; guidelines on how to protect participant privacy, how to scrub and/or protect PII, etc.
  • Tips and best practices: the heart of these sites, including pages on how to design a discussion guide; how to ask open, non-leading questions; how to listen actively; how to take effective notes; how to test multiple versions; how to match prototype fidelity to the kind of feedback needed; how to approach sample data in prototypes; how to approach analysis in a structured way

When Aruna and I were first developing my new role, we envisioned a lot of training workshops to get our partners up to speed on how to conduct research effectively. But I soon realized that relying exclusively on live training events would be problematic. Tech workers are notoriously job-fickle, with an average tenure of around two years. As people came and went from the company, so much knowledge and training would be lost. So I shifted focus to self-serve resources instead. I have occasionally done workshops, but these self-serve resources are my primary means of conveying basic research concepts.

Consulting

I also do a lot of one-on-one consulting with partners who are conducting their own research. Every project is a bit of a snowflake, with unique questions, personalities, levels of experience, timelines, and political pressures. A knowledge base of resources is great, but it can’t address every nuance and subtlety. Here’s what a typical consulting engagement looks like:

  1. I have the would-be researcher prepare a Paper doc that includes their research goals, a link to their prototype, and the questions they would like to ask participants.
  2. I meet with them to discuss their project. I help refine their research questions, give them direction for the research approach, and offer pointers on what kinds of questions to ask. (This is also an opportunity to point them toward existing research, and sometimes to reassure them that they don’t need new research to answer their questions.)
  3. I let them develop their research plan and gain alignment with their team. I give the plan another review (or sometimes a few more reviews), and offer specific wording recommendations.
  4. If it’s a Real World Wednesday project, then they are ready to participate in the next scheduled session.
  5. If it’s a UserTesting project, I have them code their test in the UserTesting platform. I preview the test within the platform and offer more feedback if needed. We leverage UserTesting’s built-in approval flow to ensure that a researcher has an opportunity to vet all projects created by non-researchers before they are launched.
  6. Then it is up to the individual to review their notes and/or recordings, analyze the data, and report the results back to their team.

Benefits and outcomes

After nearly two years of investment in this program, here are some of the successes that we’ve seen in Design and Design Research at Dropbox:

  • Researchers can focus on bigger strategic projects. When partners can service their own evaluative research needs, it frees up our professional researchers to focus more time on longer-term foundational projects.
  • More research gets done. Last year, I consulted on about 100 projects. That’s 100 projects that would’ve moved forward with less, or even no, input from users if we hadn’t had this program in place.
  • Products are more usable. I wholeheartedly believe that overall usability of our products has improved as a result of empowering more people with the right tools and methodologies to bring research into their product development process.
  • Research is less of a bottleneck for product development. When teams can effectively gather user feedback on their own, they don’t have to wait for an overextended researcher to find time to do it for them.
  • Research is being leveraged by more teams. Some teams don’t have direct research support, and this program enables them to get user feedback on their work as well. I’ve consulted on projects for internal platform teams, customer support teams, marketing and communications teams, and more.
  • Empathy for users increases. All of the Dropboxers involved in this program are getting direct contact with real users, which humanizes the customers we’re building for. Edge cases become people with real needs. Was every decision that was based on self-serve research made correctly? Hard to know! But the empathy for users that our partners have gained in aggregate far outweighs any small “missteps” they might have made in evaluating individual buttons or features. This skill for listening and developing empathy will serve them in making better, more human-centered decisions for the rest of their careers.
  • Respect for researchers grows. Research teams that are considering democratization often worry that their jobs will become obsolete. Our experience at Dropbox has been exactly the opposite. Over and over again we hear partners who’ve conducted their own research say things like “I had no idea how hard research is!” or “I have so much more respect for what your team does now!”

What about risks?

Responsible democratization of research has a lot of clear benefits. So what are the risks?

Bad research can lead to bad decisions

Quality research requires asking the right questions in the right way, and sifting through participants’ answers to understand the deeper needs that they (often unwittingly) express. It’s an art and a science, and it takes discipline, objectivity, and experience to get it right. If folks doing research don’t have the right tools or skills, they can unintentionally lead teams down the wrong path. I’ve designed my consulting practice to mitigate the risk of faulty research in a few ways:

  • Multiple touch points. I usually provide feedback on a project at least twice, and often three or four times, before it launches; I don’t try to perfect the project in just one meeting. Checking in at multiple points gives me more opportunities to keep the project on track and in line with best practices.
  • Focus on the setup. I dedicate my time and energy to getting the mechanisms of data collection right. If the team is talking to the right participants and asking open, non-leading questions, then the dataset will be reasonably good and the potential damage from less-professional analysis will be minimized.
  • Focus on evaluative methods. As I said before, not all research is appropriate for non-researchers to undertake. Evaluative projects with a small n are within reach for our partners. Most other types of research require more training and experience.
  • I don’t write every guide. It is tempting at times to write the test plan or discussion guide for partners and let them execute on it. Instead, I give them some general direction at the beginning and have them write the first draft of the test plan. That first draft shows me what I’m working with: when I read it, I can tell whether the partner has good instincts about how to communicate with users. If they don’t, I know I need to keep a closer eye on that project.
  • I always explain why I’m suggesting a particular research approach or change of wording. Every piece of feedback is an opportunity for education. Simply seeing the difference between the question a partner wrote and the revised version that I wrote can be immensely instructive. By the time the partner has completed a couple of projects, their drafts are better and they require less input from me.

Researchers will “lose control” over the insights that influence decisions

Real talk: Researchers already don’t have control. Product teams leverage input from a variety of sources to make decisions, and research is only one input. But it is important for an embedded researcher to be consulted about any research that the product team is undertaking on its own. I begin each consultation by asking “Have you already talked with your researcher?” No team that has an embedded researcher should be conducting research without that researcher’s knowledge and blessing.

Analysis is inadequately supported

I focus my consulting practice on getting data collection right, but I have not yet cracked the code of ensuring quality control in the analysis phase. I offer some general guidance (examples: take a structured approach; be mindful of confirmation bias). And I usually offer project-specific advice. But I am not doing any hands-on review of their analysis process or write-ups. By focusing on the research inputs, I’m hoping to minimize the risks of analysis by untrained researchers. So far I think this strategy has been mostly successful, but I am actively thinking about how to offer better support for analysis. If you have ideas, please share them with us @DropboxDesign.

So what do I get out of this?

The success of our program at Dropbox hinges on having someone dedicated to it full time. Obviously, that means headcount. And it requires having someone who wants to do this kind of work. An internal consultant handles a mix of training, research, and research ops. It isn’t for everyone, but maybe you or someone else in your organization will be inspired to start a similar program after you learn what I enjoy about it:

  • I get exposure to work that is happening across the company. My view isn’t complete by any means, but I do see work by teams in all product areas, and even within marketing and other groups. This allows me to connect people who are working on similar or related ideas.
  • I don’t have to do nearly as much long-term planning. When I was an embedded researcher, I was often frustrated by the roadmapping process. The only sure thing about research planned months in advance is that it will look very different by the time you execute on it. As teams learn about the space, as business pressures change, and as direction from leadership evolves, the research needs will shift as well. In my current consulting gig, I don’t have to be personally involved in this planning cycle at all.
  • Empowering others is gratifying. Helping others succeed is always rewarding. And that extends to our users, because the more they’re taken into account by product teams, the more we can improve usability and delight. Product teams and users both win here, and I get to be part of that.
  • Everyone is so appreciative. People I consult with are often wildly effusive in their gratitude. The consultation is a learning experience for them, it helps achieve their team’s goals, and it helps them make product decisions with confidence. Embedded researchers are focused on bigger foundational projects, and can’t always drop what they are doing to address last-minute evaluative research needs. When people come to me for help, I can respond quickly and enable them to do a better job — and they love that.

What does the future hold?

Our research democratization program is still a work in progress. We are continuing to iterate and innovate on strategies that keep our users at the forefront of everything we do. We are always looking for ways to encourage product teams to learn from users effectively and responsibly, including:

  • Marian has transformed our Real World Wednesday program into an online event, due to COVID-19
  • Some product teams have been experimenting with “customer councils” — panels of highly engaged users they can learn from over time
  • My colleague Jennifer DiZio specializes in international research, consulting with researchers on how to build a global, customer-focused mindset
  • One of our research interns recently piloted a program to help our Mobile team connect more easily with app users
  • I’m thinking about how to provide better support for research analysis, as well as how to help teams launch surveys

What strategies have you explored to scale research at your organization? How have you enabled (or not) your cross-functional partners to connect with users? Please comment below, or share with us @DropboxDesign.

Thanks to Michelle Morrison, Marian Oman, Amanda Miller, John Mikulenka, and Andrew Richdale for their help in putting together this article. And to Kelly Arce for the photo!

Want more from the Dropbox Design team? Follow Dropbox Design on Medium, Twitter, and Dribbble. Want to make magic together? Check out our open positions!

Having worked in UX Research since 2004, I work to scale research through intentional democratization, including successful programs at Dropbox and Airtable.