A/B testing done wrong
A/B testing is one of the most beloved methods for validating ideas in data-driven product teams. It has become so popular that many organizations sift everything through the A/B test funnel, and as a consequence, A/B testing results have become the neutral decision-maker. Does this lead to excellent products? Not always.

What is A/B testing?
A/B testing is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal. — Optimizely.com
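To make the "statistical analysis" part of that definition concrete, here is a minimal sketch of how one might compare two variants with a two-proportion z-test in Python. The visitor and conversion counts are made-up numbers for illustration, not from any real experiment.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts, for illustration only: visitors and
# conversions for variants A and B of a page.
visitors_a, conversions_a = 10_000, 520   # 5.20% conversion
visitors_b, conversions_b = 10_000, 575   # 5.75% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled rate under the null hypothesis that A and B convert equally.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))

z = (p_b - p_a) / se
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"lift: {p_b - p_a:+.4f}, z = {z:.2f}, p = {p_value:.4f}")
```

With these made-up numbers the lift looks promising but is not significant at the usual 5% level, which is exactly the kind of nuance a simple "winner/loser" readout hides.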
While experimentation is an essential part of human-centred design, there are a few common misconceptions about which questions it can and cannot help to answer. In real teams, the misuse of A/B testing often results in poor product decisions and weakens the processes that lead to them.
Mistake 1: Offloading responsibility for decisions — lack of Product Vision.
Making decisions is scary. The bigger the product, the more damage a wrong decision can do. Naturally, teams look for ways to protect themselves from mistakes and to be sure that the changes they make will give positive results. A/B testing addresses exactly this problem — it creates a safety net. Or does it, really? It gives a sense of confidence that anything that successfully passes through the A/B funnel is a positive improvement. Teams tend to experiment more, but with what?

I see that immature data-driven teams invest less in a deep understanding of the problem and in qualitative research methods. User empathy, Vision, Strategy, Impact goals, High-quality design — all lose value in the light of A/B testing. In other words, teams throw spaghetti against the wall and hope something will stick. It reminds me of the Bike Helmet Paradox: wearing a helmet may create a false sense of security, and as a result cyclists in helmets end up in more accidents because they take more risks.
This is often a problem in big, bulky companies with many teams and siloed OKRs. Teams are locked within their narrow areas of responsibility and try anything to push their metrics a tiny bit up. That leads to the second mistake — focusing on minor product adjustments instead of addressing real problems.
Mistake 2: Loss of focus
It is a problem of modern humanity — we push the limits of productivity and fill our days with lots of easy mini-tasks, but never find time for the important, hard life goals.
The story goes that life is like a jar of rocks: If you begin with the small ones they’ll never all fit. But if you put the big rocks first, virtually all the rest will fit into place.
— Stephen Covey, author of The 7 Habits of Highly Effective People

The same goes for products — we can endlessly make safe little improvements, yet never find the time or courage to make really significant changes. If a customer wants to buy those shoes in your web store, she will press the button regardless of whether it is green or blue. Every team has limited time, and it is a choice whether they spend it questioning every step or trust their gut and make a difference. It takes more effort to tackle real challenges and make bold moves than to shuffle through micro-optimizations.
Mistake 3: Experiments raise questions, not answers.
The common mistake is to settle on the A/B test results without understanding the reasons why one version performs better than another.
For some tests, understanding the reasons is less important, and A/B testing helps you decide faster — for example, when choosing the most attractive Movie Artwork or a headline. Nonetheless, in most cases you should understand why one variation converts better than the others. Once I was told a story about how a big marketplace company redesigned their search filters — an essential step of their core user flow. They spent a few months designing and developing a new filter component. Then, after a series of A/B tests, the new version kept performing poorly. The decision was made to revert to the old search. Not to refine it, not to run qualitative research to understand the reasons, just to revert it back. Has A/B testing helped them improve the product or move faster? I think quite the opposite.
A/B testing won't always give you instant answers; you still need to do your homework — formulate a hypothesis carefully, make it clear what problem needs to be solved and how you will measure it, do your user research, design in iterations, user-test in between, and understand the real reasons why users behave one way and not another.
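Part of "how to measure it" is deciding in advance how many users you need before the test can even detect the effect you care about. As a rough illustration, assuming a two-sided test on conversion rates (with made-up baseline and lift numbers), a standard sample-size estimate looks like this:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Rough number of visitors needed per variant to detect an
    absolute lift of `mde` over a `baseline` conversion rate with
    a two-sided test at significance `alpha` and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / mde ** 2
    return ceil(n)

# A 5% baseline and a hoped-for 0.5 percentage-point lift already
# require roughly 31,000 visitors per variant:
print(sample_size_per_variant(0.05, 0.005))
```

Running the numbers before the test is also a sanity check on the idea itself: if detecting the lift would take months of traffic, a small tweak may simply not be worth an experiment.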
Mistake 4: Conflicting A/B tests.
If a company takes A/B testing seriously, many tests might be running simultaneously, often on the same page, and every team tries to make their block convert better. Needless to say, changes in one part of the page or user flow can affect the others. If the Ads team did a really good job converting the ads on the home screen, the Sign-up button might suffer because of it. I cannot resist the urge to use the Booking.com example here. They use A/B testing a lot in their work process. Here is the top of their listing page. Can you imagine how many clicks the giant "Search" button steals from the "Reserve" button? I guess the listings team tried to balance it by placing a few more "Reserve" buttons in different places on the page. Sounds ridiculous, doesn't it?

Every team has reached the most optimized version of the blocks they "own", while the end result is a total mess. Think carefully about your real success metric and check how all the other optimizations contribute to it.
Mistake 5: New behaviour takes time to adopt.
Changing existing user behaviour is difficult; users take time to learn and adopt the new behaviour, which might result in metrics dropping immediately after the release. Users can even rebel and stop using your product. Do you remember how Facebook changed their Profile layout to the Timeline years ago, and thousands of people expressed their hate and stopped using Facebook? Stopped for a while, that is. Imagine if they had run a month-long A/B test on it; we would still be stuck with the old Profile page. So when you are trying to change user behaviour, be very thoughtful about your validation process. If it is an A/B test, try to focus on new users, who would have to learn the new behaviour anyway. And if you are testing with existing users — be prepared for resistance and run the tests long enough to overcome the adoption period. And you have to do it if you want to make a difference! There are tons of examples of established products that cannot make any radical product changes because they cannot afford to lose money while their users adopt new behaviours.
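One way to apply the "focus on new users" advice, if your experimentation setup allows it, is to gate experiment assignment by signup age. Here is a minimal sketch; the experiment name, the 14-day window, and the user_id/signup_date fields are hypothetical placeholders for whatever your own user store provides.

```python
import hashlib
from datetime import date, timedelta

EXPERIMENT = "new-profile-layout"   # hypothetical experiment name
NEW_USER_WINDOW = timedelta(days=14)

def assign_variant(user_id: str, signup_date: date) -> str:
    """Bucket only recently signed-up users into the experiment;
    everyone else keeps the current (control) experience."""
    if date.today() - signup_date > NEW_USER_WINDOW:
        return "control"  # existing users have old habits to unlearn
    # Deterministic 50/50 split, stable per user and per experiment.
    digest = hashlib.sha256(f"{EXPERIMENT}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

print(assign_variant("user-123", date.today() - timedelta(days=3)))
```

Hashing the user id together with the experiment name keeps each user's assignment stable across sessions without storing extra state, and keeps buckets independent across concurrent experiments.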
Conclusion
A/B testing is not a silver bullet that will solve all your product problems; it cannot replace a strong Product Vision and often distracts from it. So use it wisely, combine it with other methods, do not get obsessed, and sometimes... just be bold and trust your gut.
Photo credit @andremouton