UX Collective



When design friction is a good thing

Note: This is an academic paper and a combined effort between myself and three other graduate students in the Human-Computer Interaction program at Iowa State University (ISU).

Graphic illustration of 2 UX Designers and 1 Engineer making design decisions

The concept of system 1 and system 2 thinking was popularized by behavioral economist Daniel Kahneman and has been the subject of great interest in the academic and interaction design communities. Kahneman defines system 1 thinking as automatic thinking that involves little rational control, or “fast thinking,” and system 2 thinking as calculated, effortful thinking that requires active attention, or “slow thinking.”

As researchers, we seek to better understand system 1 versus system 2 thinking, and how an understanding of these two systems can be used in interface design to move users into system 2 thinking when necessary to ensure the accuracy of a task. Design frictions can be used to push a user into system 2 thinking while completing certain tasks; we compare users’ ability to complete tasks in which design frictions are present and evaluate whether those frictions lead to greater accuracy as a result of the move toward system 2 thinking.

Understanding system 1 versus system 2 thinking is incredibly important for interface designers. Forcing users into system 2 thinking when they complete difficult or high-risk tasks, possibly at the cost of completion time, can ensure that users complete those tasks with fewer errors. Knowing when it is appropriate to make that trade-off can result in interfaces that are ultimately easier to use and less error-prone. Furthermore, an understanding of system 1 versus system 2 thinking helps ensure interfaces are designed ethically: a solid grasp of how these two modes shape a user’s interactions can be used to advocate for users and to ensure interfaces do not manipulate them by encouraging them never to move past system 1 thinking.

The Research Shows…

Past research has examined system 1 versus system 2 thinking and their influence on a variety of cognitive functions, including attention span (Schutten et al., 2017), problem-solving (Hamalanen et al., 2015), decision making (Tay et al., 2016; Rottenstreich et al., 2007; Mishra et al., 2007), and risk analysis judgement (Aven, 2018). Only a small number of these studies have examined system 1 and system 2 thinking in the context of technology. These include multitasking while consuming media (Schutten et al., 2017), problem-solving with technologies such as email, the internet, or word processors (Hamalanen et al., 2015), and trust in a website or email based on task interruption (Egelman et al., 2008).

When it comes to system thinking and design, Egelman et al. (2008) compared active and passive warning-message designs in web browsers. They found that effective design frictions must interrupt a user’s task, must be distinguishable from the rest of the interface, and must feel trustworthy enough that the user believes they should act on them. This is the only study we found that combines system 1 versus system 2 thinking with design.

For all of the research studies we have analyzed, methods included the completion of one or more tasks and several different potential outcomes for those tasks. Outcomes of these studies indicate which thinking system was used. Depending on the study and intended research outcome, researchers may or may not have manipulated certain tasks to encourage or discourage system 1 versus system 2 thinking.

We defined system 1 and system 2 thinking briefly in the introduction. System 1 thinking doesn’t rely heavily on cognition and is formed through associative learning, while system 2 thinking is deliberative and constrained by working memory capacity (Mishra et al., 2007). This aspect of cognition is an important factor in moving a user into system 2 thinking, but we also don’t want to overload cognition, because that can revert a person back to system 1 thinking (Rottenstreich et al., 2007). “Design friction” is a relatively new term within interface design, and many of the articles we reviewed do not define it clearly. In the study by Egelman et al. on browser warning effectiveness, warnings that actively interrupted a user’s task and gave them a clear choice aided interaction with the warning. We have inferred from this study that a design friction is something that interrupts the flow of a task enough to force a participant to make a decision using cognitive processes.

We do know that system 1 and system 2 thinking work together (Tay et al., 2016), and that system 1 can interfere with system 2 thinking (Mishra et al., 2007). Pinpointing when a participant has used system 1 versus system 2 thinking is commonly operationalized through tasks in which the participant evaluates two options and then chooses between them. Many studies have used a correct/incorrect measurement to determine whether system 2 thinking was used, because the correct choice indicates which type of thinking was engaged (Mishra et al., 2007; Tay et al., 2016; Rottenstreich et al., 2007). This is how we decided to measure system 1 versus system 2 usage. This measurement also lets us count the errors participants make, which is a component of measuring task accuracy. One concern is that by giving the user an explicit comparison to make, they may become more risk-averse in their decision making (Dholakia & Simonson, 2005; Aven, 2018).

The Theoretical Framework Behind System Thinking

Much of the research shows how system 1 versus system 2 thinking plays out in specific contexts, such as medical student training, choices between healthy food options, and tasks performed from memory. The relation between system 1 and system 2 has been thoroughly studied; however, there is a lack of research on which specific design patterns can trigger system 2 thinking, especially in the face of multitasking or cognitive load. Are there ways designers could aid users in making the correct choice by getting them to use a more rule-based, methodical way of thinking? We see this gap as an opportunity to build on past, more context-specific research and generalize findings about how system 1 and system 2 thinking affect interface design.

With the goal of advancing research for interface design that encourages system 2 thinking, we arrived at our research question: can leading people into system 2 thinking while completing a task increase the task’s accuracy, and can design choices encourage system 2 thinking through the use of design frictions? Our research team shares an interest in furthering user experience design, and we want to build interfaces that support system 2 thinking; we want to find out whether design frictions are useful in achieving this. We hypothesized that adding design frictions would lead participants to use system 2 thinking more than system 1 thinking. Additionally, we hypothesized that if participants used system 2 thinking, they would have higher task accuracy.

Here’s How We Did It

To answer this question, participants in our study completed a test that included a series of events introducing them to design frictions. Two groups of participants were evaluated, broken down into subgroups, and compared against one another to determine the effects of the design frictions on whether a participant engaged in system 1 or system 2 thinking. The task was designed so that there is a clear choice: one option indicates the use of system 1 thinking, and the other indicates system 2 thinking. The design friction aids the user in choosing the option that reflects system 2 thinking. We measured accuracy through the number of errors the user made during the experiment. It is considered an error if the participant chooses the option based on system 1 thinking, switches from one incorrect answer to another incorrect answer, or changes a correct answer to an incorrect answer. While this experiment is predominantly quantitative, we also recorded the participants during the test for later qualitative analysis to support task success or failure.

Participants

We wanted to be able to generalize our results to a population that is within the 20–30 age range, has completed at least a GED level of education, and is reasonably proficient with technology and computers. To fit these criteria, participants were recruited from the Iowa State University campus. Random sampling was used to recruit equal numbers of women and men, but since we recruited from one location, a convenience factor came into play. Recruiters asked students walking across campus and exiting classrooms if they would be willing to participate in an experiment until the participant limit was reached. Willing participants were given a flier with the experiment location and time, and they were compensated $15 for completing the experiment. Using a power analysis with an alpha of 0.05, a power of 0.95, and a desired medium effect size of 0.5, we determined that our sample size should be around 50 participants per group. We sourced 150 potential participants, but after creating two groups and balancing for gender, level of education, and technology usage, we had two groups of 50 participants.
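As a sketch of how a sample-size calculation of this kind can be run (a textbook normal-approximation formula for a two-sample comparison of means, not necessarily the authors’ exact method):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.95,
                          two_sided=True):
    """Normal-approximation sample size per group for a two-sample
    comparison of means, where effect_size is Cohen's d."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / (2 if two_sided else 1))  # critical value
    z_beta = z(power)                                  # power quantile
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

The required n is quite sensitive to these choices: with a medium effect of d = 0.5, a one-sided test at power 0.80 gives roughly 50 participants per group, while a two-sided test at higher power requires considerably more.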

Study Design

This research study was a snapshot experiment: recruited participants were randomly grouped and completed the tasks in one sitting. Because the experimental design supports cause-and-effect inference, we used inferential statistics in our analysis. The test took 15–20 minutes at most to complete. To help ensure accurate results that could be applied to the general public, participants were split by gender, with equal numbers of males and females assigned to each condition. This was a between-subjects experiment; once participants completed the test, they had been exposed to the questions and design friction and could not retake the test under another condition without experience bias.

Participants were given a test with three multiple-choice questions, each with one correct answer. On completion of the test, both groups were faced with a design friction aimed at moving them into system 2 thinking. The independent variable was the type of design friction used to push the participant toward system 2 thinking. Conditions were assigned at random to an equal number of participants. Dependent variables were task accuracy and use of system 1 versus system 2 thinking. If participants answered the questions correctly, we inferred they used system 2 thinking. If a participant encountered the design friction, then returned to their questions and changed their answer to the correct option, this was also considered a use of system 2 thinking. This data is nominal, based solely on correct/incorrect answers, but it may be further expanded through screen recording data. Accuracy was measured through the number of errors the participant made during the experiment. It is considered an error if they chose the option based on system 1 thinking or changed an answer to an incorrect one. The average error calculation is continuous data, while the raw error data is a discrete frequency count.
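The error rule above can be sketched as a small scoring function. The answer-log format here (an ordered list of the answers a participant selected for one question) is an assumption for illustration, not the study’s actual instrumentation:

```python
def count_errors(answer_log, correct):
    """Count errors for one question per the study's rule:
    an incorrect first answer counts as one error (a system 1
    choice), and every subsequent change that still lands on an
    incorrect answer counts as an additional error. A change from
    incorrect to correct adds no error."""
    errors = 0
    if answer_log and answer_log[0] != correct:
        errors += 1                      # initial system 1 choice
    for prev, new in zip(answer_log, answer_log[1:]):
        if new != prev and new != correct:
            errors += 1                  # changed, but still wrong
    return errors
```

For example, a participant who first picks a wrong option and then switches to a different wrong option accrues two errors, while one who corrects a wrong first answer accrues only the initial one.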

Procedure

For the experiment, participants were separated into their groups based on the sampling described above. Participants took the test concurrently, so it occurred under similar conditions regarding time of day and environment for all groups. We used a space with a waiting area and separate yet similar quiet rooms for the groups. Each participant was given a computer, a desk, and a chair for the experiment.

Participants received initial instructions verbally, and then the test and the design frictions were administered visually on their individual computers. The desks were close enough that participants could hear the verbal instructions, but adequately spaced so that other participants were not distracting. The computers ran Windows and had both a keyboard and a mouse so that users could use whichever input method was most comfortable. Participants took a practice exam with questions similar to those on the actual test, to familiarize them with the tools and to prevent them from taking excess time on task completion. Participants also filled out a brief demographics questionnaire to gather basic information, including their familiarity with technology.

Participants were split into two groups, one that encountered active design friction and one that encountered passive design friction, based on the previous research by Egelman et al. (2008). The computer test was based on the Cognitive Reflection Test, where three questions are asked and participants must use simple math, attention, and the question’s wording to answer correctly. After selecting an answer and clicking submit, Group A was presented with a pop-up that covered the entire screen, reminding the participant to check their answers. Group B was instead presented with a small pop-up in the top right corner of the screen, reminding them to check their answers. Both groups could choose to check their work or to continue to the next screen to indicate that they had completed the test. On this second screen, participants had an additional opportunity to go back to their test and change responses. If participants were truly finished, they selected a button that said “Submit my Answers,” which finalized their test.

During each session, the participant’s screen activity was recorded, and a camera on each computer recorded the participant and any commentary they made during the test. Click-event tracking determined how many times the questions were answered and changed, and the screen recordings were used as supplemental information to verify the number of errors made. Each completed test was automatically exported to an Excel file so all results were easily compiled. Once participants finished their test, they were given their $15 compensation and could leave the testing facility.

And The Results Show…

We set out to answer the question: can leading people into system 2 thinking while completing a task increase task accuracy, and can design choices encourage system 2 thinking through the use of design frictions? To answer this, we needed to show whether system 1 or system 2 thinking occurred when design friction was introduced. We also sought to understand whether task accuracy changed when system 2 thinking was used more than system 1 thinking. Our two groups of 50 people each encountered either active or passive design friction, and the main goal of our analysis was to understand the effect these two frictions had on participants’ system 1 versus system 2 thinking and on task accuracy. Group A experienced active design friction in the form of a pop-up overlay that covered the entire screen, while Group B experienced passive design friction in the form of an upper-corner pop-up that did not cover any screen content. Group A and Group B were then split into sub-groups according to whether or not participants responded to the design friction. The diagram below illustrates this grouping structure.

Diagram 1: Group Structure

Use of System 1 versus System 2 thinking

To understand the system thinking of a participant, we looked at the number of correct answers before and after interaction with the design friction. The questions were answered correctly less than 50% of the time, signaling that many participants fell into system 1 thinking when initially completing the test. Not a single participant in either group answered all questions correctly on their first attempt. After the design friction was introduced, the number of correct responses increased for each question. A further breakdown of each participant’s response to the design friction is shown in Table 1, which is organized by question and shows how many participants answered each correctly before and after the design friction. The number of correct answers increased for the sub-groups in which participants responded to the design friction and went back to double-check their answers. A percentage change between the two situations indicates where there was a major increase in answers changed.

Introduction of Design Friction

When faced with active design friction, 84% (42 out of 50) of participants responded, while only 52% (26 out of 50) responded to the passive design friction. As seen in Table 1, no answers were correctly changed for Sub-groups 2A and 2B, since those participants moved past the design friction without going back over their answers. More people changed their responses after the active design friction than after the passive one. Overall, the median number of correct answers per participant increased after participants interacted with the design friction. This data is shown in Table 2, where the average number of correct answers stayed the same for groups that did not respond to the design friction.

A response to the design friction followed by a changed answer shows that system 2 thinking was used over system 1 thinking. To compare results between the two groups that received different design frictions, and between sub-groups within each larger group, t-tests and binomial tests will be conducted. The goal of these tests is to understand whether design friction caused participants to rethink their answers and thus engage system 2 thinking. Furthermore, to determine how likely these results would be if our null hypothesis were true, we would conduct a chi-squared test between Sub-group 1A and Sub-group 1B for both the number of responses to the design frictions and the questions changed afterwards. We would also run a logistic regression on these results, given the binary nature of the data. Overall, we want data that compares the number of changes to correct answers with the response to design frictions. This will help determine whether there was a cause-and-effect relationship between the presence of design friction and system 2 thinking, and whether system 2 thinking increased task accuracy.
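As an illustration, the response counts reported above (42 of 50 for active friction, 26 of 50 for passive) can be run through a 2×2 chi-squared test of independence. This is a from-scratch sketch of the calculation, not the authors’ analysis code:

```python
# Rows: friction type; columns: responded vs. did not respond.
observed = [
    [42, 8],   # active design friction (Group A)
    [26, 24],  # passive design friction (Group B)
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-squared statistic: sum over cells of (O - E)^2 / E,
# where E is the expected count under independence.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)
# chi2 comes out well above 3.84, the critical value at
# alpha = 0.05 with one degree of freedom, so the two response
# rates differ significantly under this test.
```

A library routine such as `scipy.stats.chi2_contingency` would give the same statistic (with an optional continuity correction) plus an exact p-value.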

Task Accuracy

To understand task accuracy, we looked at the number of errors made. Errors are based on the number of questions answered incorrectly and the number of times a participant changed their answer while it remained incorrect. Additionally, if a participant chose not to respond to the design friction, an error was added. The average number of errors was higher when participants ignored the design friction. These results are shown in Table 3, where errors were recorded before and after the design friction, then summed and averaged to give a total number of errors. The total number of correct answers is also included for comparison.

Errors Made by Participants

Errors were also counted when a participant went back and changed their answer but the answer remained incorrect. The occurrence of this is shown in Table 4, which covers Sub-groups 1A and 1B (the only groups that went back and changed their responses) and how many of those changes resulted in correct answers.

Changing an Answer Due to Design Friction

To assess how likely these results would be if our null hypothesis were true, a chi-squared analysis of the number of errors on each task was conducted between the two groups. There was an increase in correct answers when design friction was introduced, indicating an increase in system 2 thinking. For the groups where system 2 thinking was used, there were fewer errors. Further parametric and nonparametric testing will be needed to determine whether these two variables, system 2 thinking and errors made during the test, are in fact related. A Mann-Whitney U test and/or logistic regression could be used to further probe the relationship and test our null hypotheses.
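A Mann-Whitney U comparison of error counts could be sketched as follows; the per-participant error counts below are hypothetical, made up purely to show the shape of the test:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus y: the number
    of (x, y) pairs where x exceeds y, counting ties as half."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

# Hypothetical per-participant error counts (not the study's data).
errors_responded = [0, 0, 1, 1, 2]   # responded to the friction
errors_ignored = [2, 2, 3, 3, 3]     # ignored the friction

# The test statistic is the smaller of the two U values.
u = min(mann_whitney_u(errors_responded, errors_ignored),
        mann_whitney_u(errors_ignored, errors_responded))
# For n1 = n2 = 5, standard tables give a two-tailed critical value
# of U <= 2 at alpha = 0.05, so this hypothetical difference in
# error counts would be significant.
```

In practice, `scipy.stats.mannwhitneyu` computes the same statistic along with a p-value, which is preferable to table lookup for larger samples.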

Discussion

As researchers, we sought to uncover the potential for encouraging system 2 thinking over system 1 thinking through the introduction of design frictions. We measured the number of correct answers participants submitted on a CRT (cognitive reflection test) when introduced to either a passive or an active design friction. We also measured the number of errors made during the test and interactions with the design frictions.

Based on our results, we concluded that design frictions did indeed trigger system 2 thinking and improved task accuracy. We also concluded that active design friction was more likely to incite a response (and therefore a correct answer) than passive design friction. Because participants who responded to the design friction were more likely to answer correctly, engaging in system 2 thinking appears to improve the accuracy of a task. This could be a viable tool in interface design when users complete tasks that require precision and accuracy over speed. Finally, as the active design friction had a higher response rate, leading to increased system 2 thinking, we conclude that design frictions that grab a person’s attention are more likely to trigger system 2 thinking.

Potential factors that could have impacted our results include only sampling college students; expanding the population could affect the results and further generalize them. Additionally, we did not take into account users’ backgrounds, but aspects such as education or frequency of technology usage could also affect the results. Another aspect that may impact the ability to generalize the design friction method we chose is that the participants took a test instead of completing a task using an interface. An interface task, such as completing a checkout process, should be tested using the active design friction to see if it enhances system 2 thinking at a similar rate.

While we found plenty of research on both design friction and system 1 vs system 2 thinking, there were no studies that evaluated these working in conjunction with one another. Our study was the first that we are aware of that looked at the relationship between the two, sought to determine if design frictions could influence the use of system 2 thinking over system 1 thinking, and compared both active and passive design frictions.

Due to the limited research comparing system 1 and system 2 thinking, there are plenty of opportunities for further research on these topics. A study dividing groups by age could determine whether a specific age group is more responsive to design frictions than another. Furthermore, our research used CRT (cognitive reflection test) questions, but future research could look at more realistic scenarios, such as online shopping, entering personal data, signing up for subscription plans, or creating goals, to determine how consumers’ actions and/or decision making are influenced when design frictions are introduced. It could also be useful to conduct a longitudinal study to determine whether repeated exposure to design frictions would help users engage system 2 thinking more often than system 1 once the frictions were removed. Alternatively, repeated exposure to design frictions could have a ‘numbing’ effect on interface users and become less effective over time.

Having learned that design frictions can encourage system 2 thinking over system 1 thinking, that task accuracy increases with system 2 thinking, and that active design frictions are more effective than passive ones, the next step is to determine the real-world situations where this knowledge can be applied and implemented. In addition, researchers should examine exactly what aspects of active design friction make a response more likely. The next question to answer is whether there are ways to further increase task accuracy through the ‘best’ active design friction.

References

Aven, T. (2018). How the integration of System 1-System 2 thinking and recent risk perspectives can improve risk assessment and management. Reliability Engineering & System Safety, 180, 237–244.

Egelman, S., Cranor, L. F., & Hong, J. I. (2008). You’ve been warned: An empirical study of the effectiveness of web browser phishing warnings. CHI ’08: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1065–1074.

Hamalanen, R., De Wever, B., Malin, A., & Cincinnato, S. (2015). Education and working life: VET adults’ problem-solving skills in technology-rich environments. Computers & Education, 88, 38–47.

Mishra, H., Mishra, A., & Nayakankuppam, D. (2007). Seeing through the heart’s eye: The interference of System 1 in System 2. Marketing Science, 26(5), 666–678.

Rottenstreich, Y., Sood, S., & Brenner, L. (2007). Feeling and thinking in memory-based versus stimulus-based choices. Journal of Consumer Research, 33(4), 461–469. https://doi.org/10.1086/510219

Schutten, D., Stokes, K. A., Arnell, K. M. (2017). I want to media multitask and I want to do it now: Individual differences in media multitasking predict delay of gratification and system-1 thinking. Cognitive Research: Principles and Implications, 2 (8).

Dholakia, U. M., & Simonson, I. (2005). The effect of explicit reference points on consumer choice and online bidding behavior. Marketing Science, 24(2), 206–217.

Tay S. W., Ryan P. M., & Ryan C. A. (2016). Systems 1 and 2 thinking processes and cognitive reflection testing in medical students. Canadian Medical Education Journal, 7(2), e97–103. https://doi.org/10.36834/cmej.36777



Written by Laura Hedeen

