In product design all hypotheses are equal, but some hypotheses are more equal than others

Mark Shurtleff
Published in UX Collective
Jun 10, 2019 · 5 min read


It’s a new day, a new age, a new time. We now spend most of our waking hours, more than 11 a day according to Nielsen research, attending to digital media, apps, and services. Product managers and designers are in high demand as more products and services fill niche markets, and existing products run on an ever faster “treadmill” of constant upgrades and innovation to serve their growing user bases.

It’s also a new day for product design and product management. Products and services need to evolve quickly and continuously. Data informs product decisions, and A|B testing is now integrated into product development, design, and service refinement. Companies run hundreds of A|B tests every day to improve websites and apps in a never-ending process of improvement.

As more product managers, product designers, and developers adopt data-informed, hypothesis-driven processes, they may not be aware of the nuances required to gain maximum benefit from experiments and from the flood of information a steady cadence of experiments can produce. Not all hypotheses and not all experiments are created equal. Depending on your goals and needs you can create a range of experiments that will meet them, but it requires thought and planning.

First you need to operationalize your goals: all those lofty aspirations you put in product vision presentations are valuable as a “north star,” but most often they are not measurable. A seamless, fluid, frictionless experience is a wonderful aspiration, but not something you can test directly. You can test click-through rates, time on page, and completion rates for a workflow. You can interject a “popup” questionnaire with a Voice of the Customer (VOC) package, with typical return rates of 10% to 20%. These questionnaire metrics provide insight into how well you are meeting your aspirational product and design goals.
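
To make “operationalize” concrete, here is a minimal Python sketch of how an aspiration like “frictionless” might be reduced to measurable rates. The event names and fields are hypothetical, not from any particular analytics package:

```python
from dataclasses import dataclass

# Hypothetical event log entries; field names are illustrative assumptions.
@dataclass
class Event:
    user_id: str
    name: str  # e.g. "page_view", "cta_click", "checkout_start", "checkout_complete"

def click_through_rate(events: list[Event]) -> float:
    """CTA clicks divided by page views: one way to operationalize 'engaging'."""
    views = sum(1 for e in events if e.name == "page_view")
    clicks = sum(1 for e in events if e.name == "cta_click")
    return clicks / views if views else 0.0

def completion_rate(events: list[Event]) -> float:
    """Users who finish checkout divided by users who start it:
    one way to operationalize 'frictionless'."""
    started = {e.user_id for e in events if e.name == "checkout_start"}
    done = {e.user_id for e in events if e.name == "checkout_complete"}
    return len(done & started) / len(started) if started else 0.0
```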

In addition to metrics, you need to give thought to the set of design and/or workflow variants: which A|B|C… variants are worth testing? This is where the product team assesses which aspects and themes to modify to improve a product or service. What changes within the design system will lead to clearer communication, or highlight a desired feature, function, or workflow? Several options are crafted and compared against agreed-upon metrics: the essence of A|B testing.

An example may help to clarify. An online grocery store may want to highlight online price discounts: perhaps in bold and green, or perhaps showing the full price as a strikethrough to the left of the discounted price in green.
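
For illustration, a common way to serve such variants is deterministic hash bucketing, so a given shopper always sees the same price treatment across visits. A minimal sketch, with hypothetical variant and experiment names:

```python
import hashlib

# Hypothetical variant names for the grocery-discount example.
VARIANTS = ["A_bold_green_price", "B_strikethrough_full_price"]

def assign_variant(user_id: str, experiment: str = "discount_display") -> str:
    """Deterministically bucket a user so they see the same variant every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

print(assign_variant("user-42"))  # the same user always lands in the same bucket
```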

Once the team understands the A|B steps and process, it becomes part of a continual process of product improvement.

  1. Create high-level goals, the “aspirations” for your product or service. Examples: frictionless, delivering delight, informative.
  2. List the metrics you can obtain and map them to your goals. Typically a set of metrics, quantitative and qualitative, will give you good insight and map to your goals. Examples: click-through rates, time on page, customer satisfaction scores from short questionnaires.
  3. Given your goals and the ways you measure them, your team can brainstorm creative approaches to product and service evolution, from ongoing optimizations to new innovations in features, workflows, or services.
  4. Once you identify two or three design alternatives, you can gain precision, operationalize your hypotheses, and refine the alternatives for testing. Assessing goals, selecting metrics, and refining hypotheses work together to enable the evolution of your product and service. Soon your team will have a pipeline or “backlog” of themes to test, with associated hypotheses and metrics (a minimal sketch of one such backlog entry follows this list).
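
Tying the four steps together, a backlog entry might look like the sketch below. The schema and example values are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    goal: str            # the high-level aspiration (step 1)
    metrics: list[str]   # how we will measure it (step 2)
    variants: list[str]  # design alternatives to compare (steps 3 and 4)
    statement: str       # the operationalized, testable claim

# A testing backlog is then just an ordered list of these entries.
backlog = [
    Hypothesis(
        goal="frictionless checkout",
        metrics=["completion_rate", "time_on_page"],
        variants=["A_current", "B_single_page_form"],
        statement="A single-page form will raise checkout completion rate.",
    ),
]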

The alternative is to keep to a process where you implement changes and hope for the best, then do some ad hoc testing. This “random walk” approach isn’t competitive in the long run in today’s market. I was on a project where one of the lead product managers said, “Our goal is to make the issues forum more user friendly. We want to update the theme and add more spacing to the Top contributors section.” While the intent is good, this type of thinking puts you and your company at a competitive disadvantage. First, “user friendly” is an aspirational goal that needs an operational definition. Second, the manager interjects a specific design detail based on the current layout. Cherry-picking design details is just not productive for evolving a design.

We understand information as patterns, as a visual whole; a layout is a “gestalt” with information hierarchy, prioritization, grouping, alignment, colors, typefaces, and so on. We need alternative design systems, not tweaks to specifics of the current theme and layout. We need the product design team to create holistic design alternatives based on team knowledge, skills, and experience, informed by past A|B studies where possible. A professional product team knows about user personas and task goals; good product teams also keep up with design patterns to inform design alternatives. For most product evolution projects, two or three design alternatives are sufficient to evaluate. In this example the design team created alternatives based on themes for “increased visibility for solved issues.”

For this example we can look to a set of metrics including satisfaction ratings, as measured by VOC questionnaires. We can also look to other, more quantified metrics: the time it takes for a new issue to get its first community response, or the time it takes for the poster to “accept” a reply as the “solution” that solves their problem.
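
As a small illustration (the timestamps are made up), both metrics are simple differences between forum event times:

```python
from datetime import datetime

# Hypothetical forum timestamps; in practice these come from your forum's data store.
issue = {
    "posted":      datetime(2019, 6, 1, 9, 0),
    "first_reply": datetime(2019, 6, 1, 11, 30),
    "accepted":    datetime(2019, 6, 2, 8, 15),
}

time_to_first_response = issue["first_reply"] - issue["posted"]
time_to_accept = issue["accepted"] - issue["posted"]
print(time_to_first_response)  # 2:30:00
print(time_to_accept)          # 23:15:00
```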

We should also be clear that A|B testing for product is pragmatic “systems” testing. This isn’t like medical research, where we attempt to “tease out” a causal conclusion about a specific independent variable. We are not trying to draw conclusions like “drinking 2 cups of coffee daily adds 5 years to your life expectancy.” We are trying to assess which design alternatives best meet our product goals as defined by our metrics. Indeed, care should be taken with the conclusions you draw from testing. We can’t conclude that our new “expert profile” design improvements would give similar results in other parts of our site when placed in different areas, layouts, and design systems. This is all about improving the current section, with all the design context surrounding the page.
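
Even in pragmatic systems testing, you still want some statistical guardrail before declaring a winner on a rate metric. One common choice is a two-proportion z-test; here is a minimal sketch in plain Python with made-up counts:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test: did variant B's rate differ from variant A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 480/4000 accepted solutions on A vs 560/4000 on B.
z, p = two_proportion_z(480, 4000, 560, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```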

Data-informed design evolution is necessary in today’s highly competitive environment. Trying to evolve a design with ad hoc “random walk” approaches will put your product and your company at a competitive disadvantage. There are just too many variables at play, and too many options in today’s rich product design and development ecosystem. You need to build an A|B testing process that becomes embedded in your product process.

Continuous improvement is needed in today’s highly competitive marketplace. A process for testing improvements, operationalized against high-level goals, will keep your team productive and your product or service competitive.


Product Discovery & Process, Designer, Futurist, Inventor. Lean methods, cross-functional team optimization. Design Thinking & Doing. Design for Humanity