Yin and Yang: UX and Data
Numbers versus emotions.
Quantitative versus qualitative.
Data science versus humanities.
We’ve heard a lot of debates regarding which one should be prioritized. Some people take a strong stance on one side; others tell us the clichés: “both are important” or “it depends.” While they might all be right, I decided to look at my own experience and share some tips and tricks.
1. Don’t test blindly
A/B testing gained its popularity because setting one up is easy and cheap. But don’t do A/B testing just because you can, as it is very easy to fall down a rabbit hole of testing.
Does this sound familiar? Test after test, you try to find the perfect formula that drives the most revenue and the highest click-through rate, only to find yourself chasing small, incremental wins. In most cases, you won’t find a variation that fundamentally changes the way people use your product. And sadly, more often than not, bad design wins out.
Julie Zhuo, VP of Product Design at Facebook, once pointed out: “If you put a giant red button that’s blinking in the middle of the page, people are going to click on it.”
I once worked on a crucial landing page where many key interactions happen. We were constantly making small tweaks to the page (the design, the copy, the content), hoping a ‘champion combination’ would emerge. After a few months and dozens of A/B tests, the ‘champion’ did appear.
But it was ugly. Objectively ugly.
A combination of small tweaks with little design and user-centered justification gave birth to a Frankenstein that worked functionally, but not emotionally. There was a point when everyone who worked on the product decided that a redesign was necessary even if it meant spending more resources and potentially being outperformed by the original design. Fortunately, with some user research and prototyping, our redesign had a landslide win against the ‘Frankenstein.’
The bottom line is that every A/B test should have a reason. You are not blindly changing variables and turning knobs; you should have a strong hypothesis about why the change will make a difference and why users will benefit more from the new variation.
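Once you do have a hypothesis, the numbers still need to clear a significance bar before you crown a champion. Here is a minimal sketch of a two-proportion z-test using only Python’s standard library; the function name and the traffic figures are made up for illustration, not taken from any real test.

```python
# Sketch: checking whether a variant actually beat the control,
# rather than eyeballing two conversion rates. Numbers are invented.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-tailed
    return z, p_value

# Control: 200 conversions out of 5,000 visitors (4.0%)
# Variant: 260 conversions out of 5,000 visitors (5.2%)
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # clears the usual 0.05 threshold
```

A result like this tells you the lift is probably real; it still tells you nothing about *why* it worked, which is where the next section comes in.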
2. Do user research to complement data
Here’s where the ‘Yin’ and ‘Yang’ part comes in. The key for UX and data to work together is to do more user research. Quantitative analysis alone yields limited insight, because data by itself will not reveal feelings and emotions, which are a huge part of product design. Even with a winning variation, you know something is getting your users to buy in, but you don’t really know why, which makes it difficult to learn from the past and replicate the success.
And it can get worse. Sometimes people fall into a trap where they look at an A/B test result and try to draw conclusions about user behavior purely from the numbers. This is dangerous because, once we know the outcome, we can always find ways to justify and champion a design. As I described in the previous section, if you are just shooting in the dark, you may find yourself chasing small wins and losing sight of the big picture.
How can you avoid that pitfall? You need to understand the fundamentals of your users and their behaviors before you even start testing, and user research helps you accomplish exactly that. For example: why are they visiting (or not visiting) your site in the first place? What are they trying to accomplish? What is preventing them from using your product? These are all questions that can be answered through user research, but not through A/B testing or Google Analytics.
I’m not going to get into the details of what specific user research you can do to devise a stronger A/B test. But if you are interested, Jennifer Cardello wrote this article that talks about how to create better test variations with user research.
3. Data is still the most efficient way to measure success
Above I’ve addressed the limitations of data and A/B testing, and how they can lead people down the wrong path. However, I still think data is the fastest way to measure success. User research reveals what customers think and say, but it doesn’t confirm what they actually do in real life. In my experience, after doing many user interviews and usability tests, I’ve found that a lot of people are not who they say they are. During interviews, people want to sound smarter, more caring, and more observant than others. So I also learned not to fully trust what participants say in research.
As Jakob Nielsen points out in this article, A/B testing is still the fastest and cheapest way to measure actual behavior in real-world conditions. Regardless of how user-friendly or well-designed your product is, its main objective is still getting customers to use it. Period. And data will tell you exactly whether your product is working or not, because numbers don’t lie. So, like it or not, data will remain a huge part of product design.
In short, user research can tell you whether you are on the right path or not, but it alone is not enough to tell you where you are in the journey. So if success is the North Star, and user research helps you locate it, I see data as the GPS that lets you know how far you are from your goal. And they are the Yin and Yang of successful product design.
What are your thoughts?