Designing with Data
Interpreting and analyzing data as a designer

Statistics help us summarize and understand the hard data we collect, and instincts do the same for all the messy real-world experiences we observe. And that’s why the best products — the ones that people want to use, love to use — are built with a bit of both.
— Braden Kowitz
For many tech companies, design and data are intertwined. Companies work amid a constant stream of data tracking the impact of every minute change, and rely on teams of analysts, data scientists, or engineers to continuously monitor hundreds of metrics across multiple iterations.
While design instincts are still valuable, data and analytics can help you hone your product understanding and ensure your decisions satisfy stakeholders. Here are some things to keep in mind when working with data:
Novelty effect
Definition: the tendency for performance to initially improve when new technology is instituted, not because of any actual improvement in learning or achievement, but in response to increased interest in the new technology.
For data analysis, this might mean that for a period after you release a new tool, the results are surprisingly positive.
Tip: Be wary of early results that seem “too good to be true”; the lift may simply come from a change appearing to be better because it is new.
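As a quick illustration, here is a minimal Python sketch on synthetic data: a launch produces an early lift that fades back toward the baseline, which is exactly the pattern to look for before declaring a win. The metric name, baseline, and decay rate are all hypothetical.

```python
import numpy as np
import pandas as pd

# Minimal sketch on synthetic data: a launch produces an early lift that
# decays back toward the baseline. The metric, baseline, and decay rate
# are hypothetical.
rng = np.random.default_rng(7)
days = np.arange(56)                                  # first 8 weeks post-launch
baseline = 0.10
novelty_lift = 0.03 * np.exp(-days / 10)              # excitement that fades
rate = baseline + novelty_lift + rng.normal(0, 0.005, size=days.size)

weekly = pd.DataFrame({"week": days // 7 + 1, "conversion_rate": rate})
# If the week-over-week average slides back toward the baseline rather than
# holding steady, treat the early "win" as novelty, not a durable lift.
print(weekly.groupby("week")["conversion_rate"].mean().round(4))
```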
Regression toward the mean
The rule goes that in any series of complex phenomena that depend on many variables and involve chance, extreme outcomes tend to be followed by more moderate ones. Simply put, “things even out over time.”
Tip: The way to address this effect is by 1) having a control group and 2) being wary of results from small sample sizes.
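To make the point concrete, here is a minimal Python sketch on simulated data (the ~10% conversion rate and noise level are hypothetical): a metric that is pure noise around a stable mean still produces extreme days, and the readings that follow them drift back toward the mean on their own.

```python
import numpy as np

# Minimal sketch: simulate a daily metric (a hypothetical ~10% conversion
# rate) that is pure noise around a stable true mean, then look at what
# happens on the day after an unusually extreme reading.
rng = np.random.default_rng(42)
true_mean, noise = 0.10, 0.02
daily_rate = rng.normal(true_mean, noise, size=365)

# Days whose reading landed in the top 5% of all observations.
extreme_days = np.where(daily_rate > np.quantile(daily_rate, 0.95))[0]
extreme_days = extreme_days[extreme_days < daily_rate.size - 1]  # need a "next day"

print(f"mean on extreme days:      {daily_rate[extreme_days].mean():.4f}")
print(f"mean on the following day: {daily_rate[extreme_days + 1].mean():.4f}")
# Nothing in the "product" changed, yet the follow-up readings fall back
# toward 0.10: the extremes were chance, so the next observation regresses
# toward the mean.
```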
Hawthorne effect
Definition: The alteration of behavior by the subjects of a study due to their awareness of being observed.
Tip: Run a double-blind study, in which neither the participants nor the experimenters know who is receiving a particular treatment, and make a practice of using automated recording so that data collection does not depend on any individual observer.
Confirmation bias
…when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound.
―Daniel Kahneman, Thinking, Fast and Slow
When faced with high-stakes deadlines and limited resources, some people will only be interested in numbers that support their decisions. After all, investing so much time and so many resources into an experiment can leave you pinning your hopes on a certain result.
Tip: When interpreting your results, be aware of your own and your team’s bias: the tendency to interpret new evidence as confirmation of existing beliefs or theories.
Instrumentation effect
Definition: changes in the instrument, observers, or scorers that may produce changes in outcomes.
Tip: There are no tricks to beating this one. Test your experiment on different browsers and devices before it goes live, and get another pair of eyes to run quality control so that any bugs or issues are addressed first.
A/A test
Running an A/A test is much like running an A/B test, except that the two groups of users, randomly assigned to each variation, are given the exact same experience.
When in doubt, it is a good way to check the quality of the execution (variation assignment and stickiness), the data collection and integrity of the tool, and that no data is lost or altered.
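Below is a minimal sketch of what such a sanity check might look like in Python. The user-level conversion flags are simulated here, since no specific experimentation tool is assumed; in practice you would pull them from your own logging pipeline.

```python
import numpy as np
from scipy import stats

# Minimal sketch of an A/A sanity check on simulated per-user conversion
# flags (a hypothetical 10% base rate).
rng = np.random.default_rng(0)
converted = (rng.random(20_000) < 0.10).astype(float)
assignment = rng.integers(0, 2, size=converted.size)   # random 50/50 split

group_a = converted[assignment == 0]
group_b = converted[assignment == 1]

# Both groups saw the identical experience, so the difference should be
# statistically insignificant almost all of the time.
_, p_value = stats.ttest_ind(group_a, group_b)
print(f"conversion A: {group_a.mean():.3%}  B: {group_b.mean():.3%}  p = {p_value:.3f}")
# If your tooling produces "significant" A/A differences regularly, suspect
# broken randomization, non-sticky assignment, or lost/duplicated events.
```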
Twyman’s law
Named after Tony Twyman, a media-research analyst, Twyman’s law states that if a statistic looks interesting or unusual, it is probably wrong.
Results can be skewed for a number of reasons: the biases and effects mentioned above, data anomalies, or poor experiment design and test conditions.
User Segmentation
Knowing who your customers are is great, but knowing how they behave is even better.
— Jon Miller
You can segment your user base into different groups based on demographics (e.g. gender, age) or by their behavior (e.g. purchasing behavior, engagement level, user status).
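As a rough sketch, here is how that segmentation might look in pandas. The column names and thresholds (purchases_90d, sessions_30d, the age bands) are hypothetical placeholders for whatever your own analytics export provides.

```python
import pandas as pd

# Rough sketch of demographic and behavioral segmentation in pandas. The
# column names and cut-offs are hypothetical.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "age": [23, 35, 41, 29, 52, 19],
    "purchases_90d": [0, 4, 1, 12, 0, 2],
    "sessions_30d": [1, 9, 3, 30, 0, 5],
})

# Demographic segment: simple age bands.
users["age_band"] = pd.cut(users["age"], bins=[0, 24, 34, 49, 120],
                           labels=["<25", "25-34", "35-49", "50+"])

# Behavioral segment: engagement level derived from recent activity.
users["engagement"] = pd.cut(users["sessions_30d"], bins=[-1, 0, 5, float("inf")],
                             labels=["dormant", "casual", "power"])

# Behavior usually separates users more sharply than demographics do.
print(users.groupby("engagement", observed=True)["purchases_90d"].mean())
```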
Prioritize Key Metrics Over Local Metrics
One important point to highlight: beware of changes that improve an easy-to-move local metric (clicks to a feature, acquisition) at the expense of important key metrics (revenue, retention, overall experience).
One such example is Blue Apron, the company with the largest share of sales among U.S. meal kit companies but low customer retention. While on the surface the company appears to be doing well, churn and high costs mean the bottom line suffers.

Align on the Same Key Metrics
What gets measured gets managed.
— Peter Drucker
Teams that share the same language around successful product metrics are the ones focused and aligned on chasing the right ones.