Part 6

Challenges of designing responsibly with AI: how ethical considerations can be applied to the design process

Part 6 of the series ‘Designing responsibly with AI’, which aims to help design teams understand the ethical challenges of working with AI and gain better control over the possible consequences of their designs on people and society.

Marion Baylé · Published in UX Collective · Sep 22, 2019 · 8 min read

Recap of previous parts

As part of the series ‘Designing Responsibly with AI’, this article follows parts 1 to 4, which form the literature review of this research.

Overview of part 6

Synthesising primary and secondary research, this sixth article provides some answers to the initial research questions:

  • What impact is AI currently having on society?
  • What ethical considerations are relevant to the design of responsible AI-powered products or services?
  • How might ethical considerations be practically applied to the design process to guide design teams when working with AI technology?
  • How might we design AI-powered products or services responsibly?

What impact is AI currently having on society?

The first part of the literature review describes the role AI technology plays today, as well as some of its risks and negative impacts on society. The table below (Fig. 3.4) summarises the themes addressed.

Fig. 3.4. Summary of the impact of AI on society today

This review of the adverse consequences of AI on society covers only the implementation of automated decision systems. It does not encompass issues related to the impact of social media platforms, such as filter bubbles, echo chambers of public opinion, data privacy, mass surveillance or discriminatory ads. Nor does it provide any insights into the threat of misuse by bad actors or criminals, or the danger of a jobless future.

Far from being exhaustive, this review only aims to provide the necessary context for answering the second question: ‘What ethical considerations are relevant to the design of responsible AI-powered products or services?’

What ethical considerations are relevant to the design of responsible AI-powered products or services?

The second part of the literature review portrays the major ethical dilemmas that AI technology poses to society and specifies some directions for redress. Furthermore, the final part of the literature review scrutinises the human-centred design process to highlight its strengths and weaknesses in addressing the challenges brought by AI technology.

The table below (Fig. 3.5) recapitulates the three main ethical challenges of AI along with some potential directions for redress.

Fig. 3.5. Recap of the AI ethical challenges and potential directions for redress.

How might ethical considerations be practically applied to the design process to guide design teams when working with AI technology?

The final part of the literature review, along with the interviews with industry experts, highlights some existing strategies and tools that bring ethical considerations into work with the complex and uncertain nature of AI technology. The expert interviews also reveal how designers and technologists perceive ethical concerns related to the use of AI technology and how ethical considerations are applied during the design process.

The following diagram (Fig. 3.6) maps out the principal tools involving ethical considerations used by practitioners when working with AI technology across the Human-Centred Design process.

Fig. 3.6. Principal tools used when working with AI involving ethical considerations mapped onto the stages of a design process.

The analysis of primary and secondary research uncovers some of the practical challenges of designing responsibly with AI technology.

- Unintended consequences can arise when technology fails or misbehaves, succeeds beyond expectations, is used in unexpected ways, or when some populations are overlooked or ignored in the design. They are difficult for practitioners to foresee, as practitioners have no specific process or tools to help them imagine worst-case scenarios and long-term consequences for society.

- The ethical challenges of AI are considerable and specific to the technology, yet practitioners have low awareness of AI, are not conscious of all the ethical decisions they make during the design process, and continue to apply the same ethical considerations they would with any other technology.

- Ethical considerations mostly arise in the last stage of the design process, after ideas have been generated, because ethics is generally perceived as dull and rigid. Ideation and ethics thus do not seem to go together, yet once ideas are developed, it is often too late or too costly to correct the course of action.

- When ethics is considered, it takes the form of a reflective exercise in which practitioners ask themselves questions and rely on rules of thumb to validate whether their solutions are ethical. However, this happens only at an individual level, disregarding the Human-Centred Design principle that designers are not the user (Kendall, 2018), and without any guidance on which types of questions are relevant to the challenge at hand or whether they cover the whole spectrum of potential issues.

Synthesising the findings leads to eight opportunity areas that form the basis of the requirements for answering this question.

Thus, any possible implementation of ethical considerations should:

- be collaborative, fostering shared understanding, discussions and consensus across a diverse multidisciplinary team of both designers and technologists;

- bring visibility, allowing ethical challenges to be more tangible;

- meet humanity’s needs, widening the lens of practitioners on externalities;

- engage moral imagination, facilitating the development of alternative future scenarios;

- be informative, providing practitioners with the means to be more aware of AI ethical challenges and assess alternative futures;

- bring human values to the forefront, enabling practitioners to analyse the possible consequences, harms and benefits of AI technology through a value lens;

- foster reflection, helping practitioners ask ethical questions;

- come early in the design process, in the context of discovery, empowering practitioners to be proactive.

Additionally, the successful application of these recommendations should also take into consideration the following criteria:

- be clear, to structure ethical discussion in the uncertain and ambiguous context of AI technology;

- be simple, taking inspiration from the tools practitioners already know, so as not to create more complexity and confusion.

How might we design AI-powered products or services responsibly?

The analysis of the existing literature and the interviews provides a clearer understanding of the most pressing challenges and reveals three significant pillars that enable practitioners to design responsibly with AI.

Human values are central to the design of intelligent systems. Diversity in design teams has a clear relationship with algorithmic bias and can be developed through value-related consideration of ethical issues. The more diverse the team, the more closely it represents humanity, and the more likely ethical problems are to be explored from multiple perspectives. Diversity is foundational to Human-Centred Design and, like humanity itself, the three approaches of modern ethics each offer a profoundly different outlook on meaning and value. Furthermore, value-sensitive design offers a systematic way to address human values in technology throughout the design process.

Moral imagination is essential to a more responsible approach to the design of AI-powered products or services. Practitioners can turn innovation into social progress by systematically exploring the potential adverse consequences of their designs. Moral imagination can help them develop alternative images of the future and scrutinise worst-case scenarios. Drawing inspiration from the backcasting approach, practitioners could better anticipate unintended consequences and externalities by widening their lens and using their problem-solving skills to address potential ethical issues earlier in the design process, when ideas are generated.

Mindfulness is crucial to designing with AI responsibly. Developing a better awareness of both the ethical challenges of AI and the ethical decisions arising during the design process can help design teams be more conscious of their choices and make more informed and considerate decisions. Interdisciplinary collaboration can make design teams more mindful when addressing system failure, by analysing the impact of different errors and limiting the negative consequences on the user experience. Likewise, human errors caused by placing too much trust in assistive tools can be mitigated if practitioners are mindful of the impact on users over time. Opening the collaboration to experts from other disciplines, such as human psychology or human cognition, might help teams better understand the potential impact of relying more on machines and develop systems that truly enhance human intelligence without eroding human skills.

Although these findings provide a direction for the responsible use of AI technology in design, they do not fully answer the research question. How exactly can one say that an approach built on these three pillars would produce more responsible AI-powered products or services than another, especially when negative consequences can surface many years after the launch of a product or service, and it is always tricky to attribute adverse consequences to a single factor? The same goes for positive consequences. The following chapter takes these learnings to develop an explorative prototype of a process and toolkit for design teams. This method will be tested with practitioners with the aim of evaluating and discussing its adequacy for the challenge at hand.

Next articles

If you are interested in this topic, you can read the other articles in this series via the links below:

The end.

Bibliography

The full bibliography is available here.

Before you go

Clap 👏 if you enjoyed this article to help me raise awareness on the topic, so others can find it too
Comment 💬 if you have a question you’d like to ask me
Follow me 👇 on Medium to read the next articles of this series ‘Designing responsibly with AI’, and on Twitter @marion_bayle


Service & Interaction Designer, just completed a Digital Experience Design Master @Hyperisland, researcher in ‘Designing responsibly with AI’.