The design of no-brainer plan choices
A practical example of how the decoy effect can ease users’ decisions when designing pricing plans.
Being part of a community that shares the same values about food can increase the chances of building healthier individual habits¹. In this article, I show the redesign of a pricing plan that increases users’ preference for joining an online health community without altering the original services offered. For this purpose, I leveraged one of my favourite Behavioural Economics phenomena: the decoy effect. Statistical analysis showed that the share of users preferring to join the community increased from 60% to 85%.
The decoy effect
The emergence of SaaS brought users a wide range of purchase options. Monthly or yearly? Which services? For you or your team? Psychology tells us that too many options may lead to unwanted consequences, such as impaired choice and decreased motivation, a cognitive phenomenon called “choice overload”. However, if the alternatives are carefully selected, adding options can produce the opposite effect, guiding people’s choices and decreasing cognitive load. This is what happens with the decoy effect: when an individual is faced with two options perceived as equally valuable, they will tend to prefer one of the two when a third, unattractive one is added. This third option is called the “decoy”.
The third option should be selected considering the rules behind human perception. Above all, the decoy must be “asymmetrically dominated” by the option we want users to prefer. This means the third option should be similar to, but slightly worse in at least one attribute than, the option we want to emphasise. I will not dive into the details of this psychological phenomenon, but for those who want to know more, I suggest starting from a recorded lesson by Dan Ariely, where he beautifully explains this effect using The Economist’s subscription plans as an example. For a few more examples, have a look at Stephen Kahn’s article on The Conversation, which provides further comments on Ariely’s lesson.
Here I am going to show a practical example of how the decoy effect can be applied to guide users’ choices. First, I will create a pricing plan with two options where preference is split 50/50. Then, I will redesign the services offered to create a third option (the decoy). The added alternative is expected to move preferences towards one of the two original options.
The original pricing plan and its redesign
The starting point: a 2-option pricing plan
To start, I ran comparative research on pricing plan webpages, scouting different platforms (e.g. Booksy, Dunked, Klets). Following their example, I used Figma to design a plausible pricing plan webpage for a fictional health community I named HealthyU. Below is the 2-option pricing plan where users’ preference is expected to be split evenly between the two options.
The image shows HealthyU’s services. It gives the user the chance to choose between two different options. The first provides a subscription to a weekly newsletter. The second gives users access to the online health community. Applying the decoy effect can increase the chance that users will join the community plan without altering the number or types of services offered. Although pretty basic, I hope this will serve as a good starting point for learning about this lesser-known bias.
Add a third option…to reduce uncertainty
The figure below shows the renewed pricing plan with the new alternative placed between the original two. Plan A is unaltered. The original Plan B remains on the right side but is renamed Plan C. Plan B is the decoy option. Being better than Plan A but slightly worse than Plan C, Plan B is expected to move the preference towards the healthier plan, where users join the community.
Validating the hypotheses
Recruitment and analysis plan
The decoy effect has been demonstrated in many studies and is deemed rather robust. A small sample of participants should be enough to get meaningful results. Since this is a personal project, I kept the budget low, so websites that offer full-fledged research services were not considered. Instead, I relied on Qualtrics’ free plan and Amazon Mechanical Turk (henceforth, MTurk). Long story short, I connected the two services by creating a random shared ID that helped me track respondents while preserving their anonymity.
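The article does not show how the shared ID was generated, so here is a minimal sketch of one way to do it in Python (the helper name `make_completion_code` and the batch size are my assumptions, not part of the original setup). The code carries no personal information, so matching it across the Qualtrics and MTurk exports links a response to a submission without compromising anonymity.

```python
import secrets

def make_completion_code(n_chars: int = 8) -> str:
    """Generate a random, URL-safe code shared between Qualtrics and MTurk.

    The code contains no personal data; finding the same code in both
    exports links a survey response to an MTurk submission for payment.
    """
    # token_hex(n) returns 2*n hex characters; trim to the requested length
    return secrets.token_hex((n_chars + 1) // 2)[:n_chars]

# One code per expected respondent (hypothetical batch of 20)
codes = [make_completion_code() for _ in range(20)]
print(codes[0])
```

Each respondent would see their code at the end of the survey and paste it into the MTurk task to claim payment.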
For this experiment, we have two conditions: the original 2-option plan vs. the 3-option plan. I recruited 20 participants for each condition. To test the hypotheses, we will compare the preferences within each condition.
The null hypothesis states that users’ preferences do not differ from random chance; in other words, the proportions of users choosing one option or the other are split 50/50 (note that for the 3-option condition we can collapse the responses into joined/did not join the community). The alternative hypothesis states that preferences are biased towards one of the options.
So, for the 2-option pricing plan, we would expect to fail to reject the null hypothesis, meaning users perceived the two options as equally valuable. On the contrary, if the decoy worked, for the 3-option pricing plan we would expect to reject the null hypothesis, meaning one option is favoured over the other.
After 12 hours I had all the responses (6h + 6h, since the Qualtrics free plan allows only one active survey at a time). I paid the participants and downloaded the responses. Overall, the total cost of the study was about £5 (£0.10 per respondent for a 1-minute question, plus the MTurk service fee).
Did users change preference?
First, let’s have a look at the preferences for the 2-option pricing plan (see table and plot below). The preference appears almost evenly distributed (Plan A: 40%; Plan B: 60%). To move beyond intuition, we can perform an exact binomial test to assess whether there is a significant difference from a random distribution (50:50). This can be done in R with a single command²: ‘binom.test(x=c(12,8), p=0.5, conf.level = 0.95)’. If the resulting p-value is above 0.05, the difference between the proportions is not significant, and we can effectively claim preferences are split half and half. For the observed proportions, the returned p-value is 0.503. So, for the 2-option plan, users’ preference was equally split between the two options, meaning neither option was perceived as better than the other. Indeed, the first option was cheap but included only the newsletter; the second granted access to the health community but was expensive.
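For readers without R, the same exact test can be reproduced with nothing but the Python standard library. This is a sketch under the usual definition of the two-sided p-value that binom.test implements: sum the probability of every outcome at most as likely as the observed one under the null.

```python
from math import comb

def binom_test_two_sided(successes: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test, mirroring R's binom.test.

    Sums the probability of every outcome k whose likelihood under the
    null hypothesis is no greater than that of the observed count.
    """
    probs = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    observed = probs[successes]
    # Small tolerance guards against floating-point ties (as binom.test does)
    return sum(pr for pr in probs if pr <= observed * (1 + 1e-7))

# 12 of 20 respondents chose Plan B in the 2-option condition
print(round(binom_test_two_sided(12, 20), 3))  # 0.503
```

With 12 successes out of 20 it returns 0.503, matching the R output reported above.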
Now, let’s evaluate the preferences when the 3-option pricing plan was presented. Looking at the bar plot, the majority of users (more than 60%) preferred Plan C. This might seem contradictory, given that Plan C is identical to Plan B from the 2-option plan. The shift of preference is the result of adding a middle option that is better than one alternative but slightly worse than the other.
We can collapse Plans B and C into one option because our goal is to test whether the preference moved from the newsletter to the health community. The resulting table of preferences is the following.
Again, we can run an exact binomial test to assess the statistical difference between the proportions, typing in R ‘binom.test(x=c(17,3), p=0.5, conf.level = 0.95)’. The returned p-value is 0.003, below the threshold of 0.05. Therefore, the 3-option plan, which used the middle option as a decoy, increased the preference for joining the community. I also estimated the 95% confidence intervals and added them to the bar plot (see below) to visually inspect the magnitude of the effect.
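The article does not say which interval it plotted; in R, binom.test reports exact Clopper-Pearson intervals. As an illustration, here is a dependency-free sketch of the Wilson score interval, a common alternative for binomial proportions (an assumption on my part, not necessarily the interval used in the original plot).

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 17 of 20 respondents chose the community plans (B and C collapsed)
low, high = wilson_ci(17, 20)
print(f"{low:.2f}-{high:.2f}")
```

Because the interval stays well above 0.5, it visually supports the same conclusion as the p-value.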
This is the “power” of the decoy effect. When carefully selected, the addition of a third alternative can guide the choice towards the optimal decision while easing the cognitive load. If you look at the 3-option plan, isn’t it easy to imagine anyone looking at it and going “Well, if I can get more for the same price, it’s a no-brainer”?
A final note
Although the results are promising, it is important to note that the experiment could only measure users’ intention to subscribe to the plans (the survey asked: “Read the plans available and their cost. Which one would you prefer?”). The question now is: after being nudged to subscribe to the health community, do users actively engage with it? Future research should plan to collect data about subscribers’ behaviour.
Some of you might have noticed that the decoy option was supposed to be unattractive. That is, I did not expect anyone to choose Plan B from the 3-option pricing plan, because I assumed it would be irrational to choose a service that gives you less than the immediate alternative at the same price (i.e. Plan C). Honestly, that’s the beauty of the human mind; on the practical side, if this happened in a business context, I would suggest planning some interviews with those users to understand what is wrong with the newsletter service.
Footnotes
1. See for instance the article by Kevin Hwang at VeryWellHealth.
2. If you want to know more about R, have a look at the official webpage by the R Foundation, “What is R?”.
Hope you enjoyed this short example of how psychology can be practical. I would really appreciate hearing from you: drop a comment and share your thoughts!
Bonus: Did you know this psychological bias violates one of the basic principles of economics?