Part 4

Design of Responsible AI: imbuing values into autonomous systems

Part 4 of the series ‘Designing responsibly with AI’, which aims to help design teams understand the ethical challenges of working with AI and gain better control over the possible consequences of their designs on people and society.

Marion Baylé
Published in UX Collective
Mar 22, 2019 · 11 min read


Ethics and Design

“Ethics as a discipline explores how the world should be understood, and how people ought to act” (Burton et al., 2017).

According to Cennydd Bowles, “design is applied ethics” (2018, p.4).

This strong statement suggests that design practitioners should be conscious that each decision they make during the design process is an ethical one. When we talk about ethics, we often frame it as asking what the “right” thing to do is. However, this question quickly becomes hazy, as what is ethical for one person can be unethical for another. The particularity of ethics is thus that there is more than one “right” answer. Indeed, without being necessarily mutually exclusive, there are three major approaches to ethics: deontological ethics, utilitarianism, and virtue ethics, and each of them offers a profoundly different outlook on meaning and value (Burton et al., 2017).

“While deontologists focus on duty, and utilitarians look only at consequences, virtue ethicists are more concerned by the overall moral character” (Bowles, 2018, pp.52–125).

Therefore, these three schools of modern ethics ask very different questions when facing a moral dilemma. While deontologists ask what the right rules are and utilitarians ask what produces the greatest possible good for the greatest number, virtue ethicists ask what virtues they would demonstrate by taking a particular action (Fig. 2.9). However, what matters is to “consider each problem from multiple angles, to reach a considered judgement about which theory (or which theories in combination) are best suited to describe and address a particular problem, and to consider the effects of possible solutions” (Burton et al., 2017). In the words of Bowles:

“ethical theories aren’t tools so much as lenses through which to see the world […] ethics is more about asking the right questions and discussing the responses. The journey is often as relevant as the destination” (2018, p.80).

Fig. 2.9. Comparison of the three major schools of modern ethics (Burton et al., 2017; Bowles, 2018, pp.52–126).

Human-Centred Design: the answer to solving problems in AI?

Design can positively change the way algorithms are developed today. Until now, AI has been the exclusive territory of technologists, but as smart algorithms become ever more intertwined with our everyday products and services, they play a significant role in shaping the human experience and thus become the designer’s business too. In the wake of the many problems surrounding the use of AI in products, Danish experience designer Rie Christensen advocates getting designers involved if they want a chance to keep influencing the design of human experiences positively (2018). Likewise, Tim Brown, CEO of Ideo and one of the biggest proponents of Human-Centred Design, believes design is uniquely poised to explore the possible adverse outcomes of AI-powered products in advance and offer solutions to everything from climate change to social and economic inequality (Brown cited in Budds, 2017).

The marriage of Human-Centred Design and Data Science may help address the problem of algorithmic biases. Indeed, like other big design firms, Ideo, which recently acquired the data science company Datascope (Budds, 2017), states that data science is the new discipline of Human-Centred Design and that ethics are foundational to developing human-centred AI solutions (Ideo, 2018). As Fei-Fei Li advocates for more inclusion and diversity to fight the bias issue (2018), Human-Centred Design, characterised by the “adoption of multidisciplinary skills and perspectives” (Giacomin, 2014), the use of empathy to gain a deep understanding of people’s and communities’ needs (Ideo, 2015) and the involvement of users throughout the design process (Giacomin, 2014), seems like a good approach. However, there is not yet any evidence that human-centred design has been able to rid technology of bias, and some designers believe that ‘design thinking’ has reached the limits of its usefulness for solving complex systemic problems, like racial inequality (Budds, 2017; Girling and Palaveeva, 2017; Schwab, 2018). Indeed, there are many examples of digital products and services that found a real fit with their users but failed to take into account broader cognitive and social biases, overlooking or ignoring some populations and producing so-called ‘externalities’. Airbnb is an excellent example of a service popular with hosts and renters that failed to foresee the negative consequences for lower-income residents squeezed out of affordable housing (Girling and Palaveeva, 2017; Coulman, 2018). Furthermore, E.M. Cioran claims that “design is inherently an unethical industry”, as he believes that empathy has little relationship with who holds the power to make the final decision on an idea or product (cited in Schwab, 2017a).

The interdisciplinary collaboration between human-centred designers and data scientists can better anticipate failure in autonomous systems and mitigate it. Errors in machine learning algorithms come from misclassification. In the case of the immigration fraud detector previously mentioned, some international students may have been classified as non-English speakers when they were in fact fluent, while others may have passed the test when they were not. These two types of errors are respectively called false negatives and false positives, and both can have significant consequences for the people affected by them (Schwab, 2017b). Josh Lovejoy and Jess Holbrook from Google, as well as Daryl Weir from Futurice, propose using the ‘confusion matrix’ to help identify the possible decisions the machine might make and compare them to the different cases that might happen in reality (Fig. 2.10). Designers then need to define which error is the least harmful for the user, or has the least impact on the user experience, and pass this information to the data scientists building the algorithm, who can favour one kind of error over the other (Lovejoy and Holbrook, 2017; Weir et al., 2017). Lovejoy and Holbrook propose complementing this tool with a testing technique such as the ‘Wizard of Oz’ to verify or discard the assumptions made about the users (2017). This trade-off between precision and recall is an ethical design decision that designers and developers can make together based on their understanding of the users. Although this approach is interesting for designers to practice moral imagination by identifying and prioritising the possible errors, and to make more informed decisions about the impact of a system’s failure, it does not provide any instruction about what level of inaccuracy would be acceptable for users, and even less does it solve the problem of inaccuracy in systems that perhaps should not have any (Sonnad, 2018). Furthermore, it does not give designers any guidance on how to design for the people who will be affected by these errors.

Fig. 2.10. Confusion matrix (Futurice, n.d.).
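As a concrete illustration of this precision and recall trade-off, here is a minimal Python sketch; the scores, labels and threshold values are entirely hypothetical and are not taken from the article or from any of the cited tools. It simply shows how moving a classifier’s decision threshold shifts errors between false positives and false negatives, which is exactly the choice the confusion matrix is meant to make visible to a design team.

```python
# Minimal sketch with hypothetical data: building a confusion matrix and
# shifting the decision threshold to favour one error type over the other.

def confusion_matrix(actual, predicted):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

# Hypothetical model scores (probability of the "positive" outcome) and ground truth.
scores = [0.91, 0.40, 0.72, 0.15, 0.66, 0.08, 0.87, 0.55]
actual = [True, False, True, False, False, False, True, True]

# A higher threshold makes the system more cautious about flagging people:
# fewer false positives (people wrongly flagged), more false negatives (people missed).
for threshold in (0.5, 0.8):
    predicted = [s >= threshold for s in scores]
    cm = confusion_matrix(actual, predicted)
    precision = cm["tp"] / (cm["tp"] + cm["fp"]) if (cm["tp"] + cm["fp"]) else 0.0
    recall = cm["tp"] / (cm["tp"] + cm["fn"]) if (cm["tp"] + cm["fn"]) else 0.0
    print(f"threshold={threshold}: {cm}, precision={precision:.2f}, recall={recall:.2f}")
```

In practice, the design team would ask for the threshold (or the model itself) to be tuned according to which of the two errors they judge least harmful for the people affected.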

While Human-Centred Design has the potential to bring new perspectives and ethical considerations into the design of AI-powered products and services, it is not enough on its own: it needs to “push the reflection about the potential impact of new designs beyond the direct benefit of use by primary users” (Cababa cited in Schwab, 2018) and beyond the “happy paths”. In the words of Bowles: “According to the law of unintended consequences, there will always be outcomes we overlook, but unintended does not mean unforeseeable. We can — and must — try to anticipate and mitigate the worst potential consequences” (2018, p.8). “We have the responsibility to evolve from human-centred design thinkers to humanity-centred designers” (Girling and Palaveeva, 2017).

Moral imagination: towards Humanity-Centred Design

If designers need to think more broadly about the direct and secondary consequences of their work when using AI, to minimise the chances of creating more problems than they are trying to solve, they need to ask themselves new questions and train what Bowles calls their “moral imagination” (2018, p.19). Bowles explains that designers need to develop their ability to imagine and morally assess a range of future scenarios to become better at spotting and addressing unintended consequences and externalities (2018, p.19). Sheryl Cababa from the design agency Artefact similarly suggests that exploring the alternative paths of what could go wrong is a way for designers to start grappling with the ethical issues of their work (cited in Schwab, 2017b).

Fig. 2.11. The “futures cone” (Voros, 2003).

The field of futures-oriented studies provides interesting inspiration to practice moral imagination. To shift designers’ narrow focus on the user towards broader perspectives and long-term impacts, Rob Girling and Emilia Palaveeva from Artefact recommend a technique called ‘backcasting’, which starts by “defining a preferable future state then work backwards to identify necessary actions and steps that will connect the future to the present” (2017). Unlike the forecasting approach, with its “futures cone” model (Fig. 2.11), which is more reactive as it is based on dominant trends and used in the context of justification, backcasting is a proactive and multidisciplinary research technique based on problem-solving and used in the context of discovery (Fig. 2.12) (Dreborg, 1996). This approach therefore seems especially well suited to designers, who can unleash their creativity on the negative paths they must strive to avoid in order to reach the ideal future previously outlined collectively. Furthermore, Dreborg highlights that backcasting is particularly appropriate when the problem is complex, affecting many sectors and levels of society; when there is a need for major change; when dominant trends are part of the problem; when the problem is to a great extent a matter of externalities; and when the time horizon is long enough to allow considerable scope for deliberate choice (1996). Surely, problems posed by AI could fit this pattern. Although backcasting is more an approach than a step-by-step method, it helps to develop new alternative scenarios and describe images of the future with “value-related considerations that lie behind the choice” by highlighting the consequences, pros and cons of different solutions and strategies (Dreborg, 1996).

Fig. 2.12. Comparison between forecasting and backcasting — five levels (Dreborg, 1996).

Virtue ethics and value-sensitive design: imbuing human values into autonomous systems

If designing is about making choices based on value-related considerations, designers of autonomous systems need to think about what kind of values they want their smart systems to display and reflect onto their users and society, in order to mitigate the potential adverse effects of AI. However, as there is no set of universal values for ethical design, there are multiple ways to bring human values into a system rather than one single recipe.

Virtue ethics, the third major pillar of modern ethics, can bring great inspiration to designers in defining positive human values to embed into algorithms. Virtue ethics comes from ancient Greek, Confucian, and Buddhist philosophies of moral self-cultivation and practical wisdom, and considers that to live well, or “flourish”, we must demonstrate positive virtues in all our choices. Shannon Vallor, author of Technology and the Virtues, opts for twelve ‘technomoral virtues’ for living well with emerging technology, including humility, justice, courage, empathy, care, and wisdom (Fig. 2.13) (2018). While this theory might be a little too abstract for design practitioners, Christensen provides a good entry point into how empathy and care could be embedded (2018). To make users feel like the machine understands their needs and cares for them, she suggests asking the question “what would my mother do?” to help explore the underlying intentions of users’ actions and see if AI could bring a similar value (Christensen, 2018). Bowles proposes another alternative with the ethical test: “would I be happy for my decision to appear on the front page of tomorrow’s news?” (2018, p.125). These approaches and practical examples can help designers be more reflective about the ethical decisions they make by considering how users and society would perceive their choices.

Fig. 2.13. Technomoral virtue ethics (Vallor, 2018).

The idea of imbuing virtues in technology is echoed in ‘Value-Sensitive Design’, a process that methodically accounts for human values in the design of systems. Value-Sensitive Design is an iterative methodology that starts by asking which values are the most important to the project’s stakeholders (Fig. 2.14) and then maps the potential harms and benefits of using the technology for each of them (Friedman et al., 2006). Value-Sensitive Design is in many ways very close to the core theory and tools used in ‘experience design’, but it emphasises the analysis of consequences for a wider range of stakeholders, which is lacking in a human-centred design approach. Experience design, as defined by Marc Hassenzahl, is not about technology or interface but about thinking first about the desired impact on people, focusing on the consequences of using a product and how people can be influenced by using it (Hassenzahl, n.d.). By considering a larger range of stakeholders, Value-Sensitive Design addresses potential value conflicts, as “at times designs that support one value directly hinder support for another”, such as accountability vs. privacy, trust vs. security, environmental sustainability vs. economic development, privacy vs. security, and hierarchical control vs. democratization (Friedman et al., 2006). Bowles proposes a tool called the ‘value spectrum’ that can structure the discussion that emerges in design teams in case of value conflicts by placing a slider between two colliding values (Fig. 2.15). Although Value-Sensitive Design is not a simple step-by-step process, it is an excellent place to start for designing with human values in mind, as it can help create and sustain equity with minimum negative impact in the context of system design with many direct and indirect stakeholders.
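As a purely illustrative sketch of how a design team might record such a mapping, the Python snippet below encodes stakeholder values with their potential harms and benefits, plus a value-spectrum position for one conflict. All the stakeholders, values and numbers are hypothetical; neither Friedman et al. nor Bowles prescribe any particular data format.

```python
# Hypothetical sketch of a Value-Sensitive Design mapping and a value-spectrum record.
from dataclasses import dataclass, field

@dataclass
class StakeholderValue:
    stakeholder: str                  # direct or indirect stakeholder
    value: str                        # e.g. privacy, trust, autonomy (cf. Fig. 2.14)
    benefits: list = field(default_factory=list)
    harms: list = field(default_factory=list)

@dataclass
class ValueSpectrum:
    value_a: str                      # one colliding value, e.g. "privacy"
    value_b: str                      # the other, e.g. "security"
    position: float                   # 0.0 = fully favour value_a, 1.0 = fully favour value_b

# Hypothetical mapping for an AI-powered fraud-detection feature.
mapping = [
    StakeholderValue(
        stakeholder="international students",
        value="freedom from bias",
        benefits=["faster processing for legitimate applicants"],
        harms=["false positives may trigger wrongful proceedings"],
    ),
    StakeholderValue(
        stakeholder="case officers",
        value="accountability",
        benefits=["audit trail of automated decisions"],
        harms=["over-reliance on opaque model outputs"],
    ),
]

# The design team records where it landed after discussing one value conflict.
trade_off = ValueSpectrum(value_a="privacy", value_b="security", position=0.35)

for item in mapping:
    print(f"{item.stakeholder} / {item.value}: +{item.benefits} -{item.harms}")
print(f"Slider between {trade_off.value_a} and {trade_off.value_b}: {trade_off.position}")
```

Making the trade-off an explicit, recorded artefact is the point: the slider position documents a decision the team can revisit when stakeholders or circumstances change.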

Although virtue ethics and Value-Sensitive Design offer an appealingly optimistic approach, they do not provide a simple recipe that design teams could easily implement in their process. Imbuing values in smart systems will always involve long discussions about which values matter most, with the aim of reaching a consensus that will always be a trade-off, and therefore not perfect for everyone.

Fig. 2.14. Human values (with ethical import) often implicated in system design (Friedman et al., 2006).
Fig. 2.15. Value spectrum (Bowles, 2018, p.127).

Previous & next articles

If you are interested in this topic, you can read the previous articles in this series via the links below, and stay tuned for what is coming next:

The end.

Bibliography

The full bibliography is available here.

Before you go

Clap 👏 if you enjoyed this article to help me raise awareness on the topic, so others can find it too
Comment 💬 if you have a question you’d like to ask me
Follow me 👇 on Medium to read the next articles of this series ‘Designing responsibly with AI’, and on Twitter @marion_bayle


Service & Interaction Designer, just completed a Digital Experience Design Master @Hyperisland, researcher in ‘Designing responsibly with AI’.