Part 1

Introducing ‘Designing responsibly with AI’

Part 1 of the series ‘Designing responsibly with AI’, which helps design teams understand the ethical challenges of working with AI and gain better control over the possible consequences of their designs on people and society.

Marion Baylé
Published in UX Collective
Mar 1, 2019 · 8 min read


Why this study is needed

Artificial Intelligence has been on everyone’s lips over the past year and is currently triggering many ethical concerns about its impact on society. While initiatives to develop codes of ethics and principles for a ‘responsible AI’ have started to emerge, there is still a lack of discussion about how relevant ethical considerations can be practically implemented within the design process by the teams behind the technology. This study investigates this gap and offers practical recommendations for design teams to gain better control over the possible consequences of their designs on people and society.

This report was written as a thesis for the Digital Experience Design Master of Arts at Hyper Island UK (Manchester).

What’s in it for you?

  • Increase your knowledge about Artificial Intelligence, its impact and ethical challenges for society
  • Understand the relationship between ethics and design and explore the benefits and shortcomings of Human-Centred Design when working with AI technology
  • Be inspired to lead the change and explore new ethical design approaches when working with AI

Abstract

How might we design AI-powered products or services responsibly?

In recent years, Artificial Intelligence has experienced waves of euphoria and is increasingly being used in the products and services that shape our daily lives, whether people are aware of it or not. Smart algorithms can decide whether someone is offered a loan or a job interview, is fired, is granted a visa or healthcare benefits, is flagged as a terrorist, or is paroled. While AI’s latest technological advances have unlocked new levels of productivity and innovation, a range of unintended consequences has caused discrimination and undercut human rights at an unprecedented scale. This paper explores some of the impact AI currently has on society and examines its benefits and adverse consequences to identify the major ethical challenges in designing autonomous and intelligent systems. The relationship between ethics and design is examined to outline the advantages and shortcomings of Human-Centred Design in the development of automated decision systems with AI technology, and to offer inspiration for alternative ethical approaches and design processes that take humanity’s needs into account. The application of ethical considerations throughout the design process when using AI technology is scrutinised to identify the challenges and opportunities for better supporting design teams in foreseeing and mitigating unintended consequences for society. Academic research and interviews with design and technology experts are analysed and synthesised into practical recommendations for a more ethical approach to the design process. These recommendations are translated and tested through an explorative process with a series of tools based on human values to imagine worst-case scenarios, in order to become more mindful of the possible negative impact of new designs on humans and society when using AI technology.

Introduction

In recent years, Artificial Intelligence has experienced waves of euphoria and is increasingly being used in the products and services that shape our daily life and work, whether people are aware of it or not. Smart “algorithms silently structure our lives” (Martin, 2018). They not only decide what shopping recommendations you see on Amazon or what news appears in your social media feeds; algorithms can also predict whether you get a home loan (Kharif, 2016), what you will pay (Angwin et al., 2016a), whether you get a job interview (Goodman, 2018), whether you are fired (O’Neil, 2016), whether you get a student visa (Sonnad, 2018), whether you get healthcare benefits (Lechter, 2018), who is a terrorist (Picheta, 2018), and even whether you are paroled and how you are sentenced (Angwin et al., 2016). These autonomous systems sift through big data sets to make predictions and take all kinds of decisions, to the extent that they are governing our society. While “data has become one of our most precious resource” (Spohrer and Banavar, 2015), “algorithms have made data useful” (Norman, 2017).

Advances in Artificial Intelligence have enabled “unprecedented automation of tasks long thought undoable by machine” (Norman, 2017), unlocking new levels of productivity and innovation. While machines take on mundane, repetitive or time-consuming tasks, freeing up valuable human time for more complex or meaningful endeavours, smart assistants enhance human capabilities, enabling us to make sense of large amounts of data and make better decisions. However, artificial intelligence is raising many moral concerns about its societal impact. Algorithms are unpredictable, unfairly biased, inscrutable and flawed, yet they are making decisions in high-stakes domains, causing a range of unintended consequences from new forms of discrimination and prejudice to the undercutting of human rights and autonomy. The truth is that “even the most benign, well-intended acts can have unexpected impacts” (Bowles, 2018, p. 7). As Paul Virilio said: “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution… Every technology carries its own negativity, which is invented at the same time as technological progress” (cited in Bowles, 2018, p. 7). So, when we invent artificial intelligence, we may not create new specific harms, but we still automate many existing ones at scale.

“We know AI is going to change the world, but who is going to change AI?” (Li, 2018). This question echoes the loud calls to fix the ethical issues caused by AI and brings responsibility back to its creators. Understanding how to design AI-powered products and services responsibly is therefore paramount. Historically the exclusive territory of technologists, AI is becoming designers’ job too, given the significant role smart algorithms play in shaping the human experience of products and services. Human-Centred Design may be uniquely poised to explore the possible adverse outcomes of AI-powered products in advance, though it has also shown shortcomings in considering humanity’s needs. Developing ethical AI thus requires understanding the issues that AI technologies can bring in the long term and applying new design considerations to gain better control over the possible consequences of new designs on society. As working with AI requires thinking about and designing products and services differently, the design process, including the interdisciplinary collaboration between designers and data scientists, needs to be adapted to ensure good practice for the future of designing with AI technology.

This paper first explores the state of artificial intelligence today and some of its positive and adverse impacts on society to identify the central ethical dilemmas in designing autonomous and intelligent systems. The relationship between ethics and design throughout the design process is scrutinised to outline the challenges and opportunities for responsible AI design. Academic research, alongside interviews with industry practitioners in both design and technology fields, is analysed and synthesised into practical recommendations for a more responsible approach to the design process when using AI technology. These recommendations are translated and tested through an explorative process and tools, with the aim of supporting design teams in better foreseeing and mitigating unexpected consequences for society.

Aims

The aims of this project are three-fold:

- increase awareness among industry practitioners about AI technology and its ethical challenges for the future of our society

- develop awareness of ethical considerations in design decisions throughout the design process

- contribute to the dialogue between academics and industry practitioners on how to develop best practice when designing with AI technology

The underlying intention is to shift technologists’ and designers’ perception of ethics from a rigid and tedious way of thinking to a mode of innovation that ensures long-term benefits for businesses and society alike.

Research questions

- What impact is AI currently having on society?

- What ethical considerations are relevant to designing responsible AI-powered products or services?

- How might ethical considerations be practically applied to the design process to guide design teams when working with AI technology?

- How might we design AI-powered products or services responsibly?

Limitations of study

My academic background and industry experience are in Design, covering Human-Centred Design, but I have no prior experience in either ethics or artificial intelligence. Most of my knowledge of AI and ethics was gathered throughout this research by reading, interviewing experts, following the online course “Elements of AI” from the University of Helsinki, and attending the Techfestival in Copenhagen on technology and humanity. Given the overall duration of five months, I cannot be expected to reach an expert level of understanding in any of these fields. I see this research project as an initial exploration of these topics in a much longer journey throughout my career.

Research on the impact of AI on society is still very much in its infancy, and AI ethics is an area that has only garnered significant attention over the last two years (IEEE, 2017). Given the scope of the AI field, the range of unintended consequences studied here has been narrowed to those caused by automated decision systems and does not cover those arising from social media platforms. Besides, although the topic is global in nature, the research was undertaken from a Western point of view, as all the sources come from Europe and the US rather than Asia. This can be explained by the fact that ethics is perceived very differently in Eastern countries like China, which is known for favouring outcomes over process compared to Europe (Rolver and Lundberg, 2018).

Terminology

In this paper, ‘AI’ refers to ‘Artificial Intelligence’ and is used broadly to cover various applications including machine learning, while ‘ML’ exclusively designates ‘Machine Learning’. The terms ‘algorithms’, ‘autonomous and intelligent systems’, ‘autonomous systems’ and ‘automated decision systems’ are used interchangeably to refer to applications of Artificial Intelligence.

The term ‘designers’ refers to anyone involved in design, facilitation or research within a project, while the terms ‘technologists’, ‘data scientists’ and ‘developers’ are used interchangeably to refer to anyone with a computer science background involved in writing code and manipulating data. ‘Design teams’ and ‘practitioners’ include both designers and technologists and refer to anyone involved in design, code, or both.

Next articles

If you are interested in this topic, you can read the other articles in this series via the links below:

more to come, stay tuned 📺

Bibliography

The full bibliography is available here.

Before you go

Clap 👏 if you enjoyed this article to help me raise awareness on the topic, so others can find it too
Comment 💬 if you have a question you’d like to ask me
Follow me 👇 on Medium to read the next articles of this series ‘Designing responsibly with AI’, and on Twitter @marion_bayle


Service & Interaction Designer, just completed a Digital Experience Design Master @Hyperisland, researcher in ‘Designing responsibly with AI’.