Part 3

Ethical dilemmas of AI: fairness, transparency, collaboration, trust, accountability & morality

Part 3 of the series ‘Designing responsibly with AI’ to help design teams understand the ethical challenges of working with AI and gain better control over the possible consequences of their designs on people and society.

Marion Baylé
UX Collective

Artificial intelligence presents new and unique challenges to ethics and morality (IEEE, 2017).

Fairness & Transparency

While automated decision systems have the potential to bring more efficiency, consistency and fairness, they also open up the possibility of new forms of discrimination which may be harder to identify and address. The opaque nature of machine learning algorithms and the many ways human biases can creep in challenge “our ability to understand how and why a decision has been made” and our capacity to guarantee fundamental values of society, such as fairness, justice and due process rights (United Nations, 2018; Martin, 2018).

As seen in the first part, AI can improve efficiency by enabling firms and employees to make sense of large amounts of data and thereby make more informed decisions in a shorter period (Wladawsky-Berger, 2018). Indeed, both Brynjolfsson (2017) and Hume (2018) take the example of cancer diagnosis to argue that AI not only makes people more productive but also helps them do their job better than they could before by minimising bias and error. Systems trained to diagnose cancer are reaching higher accuracy rates than a radiologist by collecting the judgment calls that thousands of doctors have made in the past and transferring that knowledge into a mathematical formula (Hume, 2018). By filtering through all the images and only selecting the troubling ones, machines can relieve doctors of some of the cognitive load: they no longer have to sort through every image, where they might overlook some and make mistakes (Brynjolfsson, 2017). The belief that algorithms can outperform expert judgment by being neutral, or less biased than humans, is shared by Nobel laureate Daniel Kahneman, who argued at the Toronto conference on the Economics of AI that the decision-making process of humans is “noisy” and should therefore be replaced by algorithms “whenever possible” (cited in Pethokoukis, 2017). In the words of Jim Guszcza, “just as eyeglasses compensate for myopic vision, data and algorithms can compensate for cognitive myopia” (2018).

However, there are many counter-examples (see previous part) which demonstrate how biases sneak into training data and how machine learning mechanisms reinforce them, causing more discrimination and injustice. In response to the problem, IBM, Facebook, Microsoft and others all released “bias busting” tools earlier this year to expose and try to mitigate bias (sending more AI to fix AI). However, addressing bias requires more than a technological fix; it requires an understanding of the underlying structural inequalities (Whittaker et al., 2018; United Nations, 2018). Whether explicit or implicit, biases are the symptom of a lack of diversity among the people who build the technology (Li, 2018). Indeed, women and minority groups remain underrepresented in the technology field, which makes it harder to properly represent humanity and overcome biases. Fei-Fei Li advocates for more inclusion in AI education to make sure that the people behind the technology, the technologists, better represent humanity and thus carry the kind of values we collectively care about as a society (2018). “As technology is not value-neutral, it needs to be built and shaped by diverse communities in order to reduce adverse social consequences” (United Nations, 2018).
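To give a concrete sense of what such “bias busting” tools measure, here is a minimal, hypothetical sketch of one of the simplest checks they report, the disparate impact ratio, which compares how often a model gives a favourable outcome to two groups. The function names and toy data are invented for illustration; this is not how IBM’s, Facebook’s or Microsoft’s toolkits are actually implemented.

```python
# Hypothetical illustration of a disparate impact check, one of the simplest
# fairness metrics that bias-auditing toolkits report. It compares the rate
# of favourable decisions (e.g. loan approvals) between two groups.

def favourable_rate(decisions, groups, group):
    """Share of people in `group` who received a favourable decision (1)."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favourable rates; values far below 1.0 suggest the protected
    group is disadvantaged (a common rule of thumb flags ratios under 0.8)."""
    return (favourable_rate(decisions, groups, protected)
            / favourable_rate(decisions, groups, reference))

# Toy data: 1 = approved, 0 = refused, for applicants from groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 here, well below 0.8
```

A metric like this only exposes a disparity; as argued above, it says nothing about why the disparity exists or whether the underlying data reflects structural inequality.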

As it will take time to fix the bias issue, there is a loud call for transparency and explainability to make black-box models comprehensible to those affected by them. There are lots of open questions regarding what constitutes a fair explanation and what level of transparency is sufficient, as well as transparent to whom and for what purpose (Matsakis, 2018). In fact, transparency may be neither feasible nor desirable (Ghani, 2016). Too much transparency, such as letting people know exactly how decisions are made, can allow them to “game” the system and orient their data to be viewed favourably by the algorithm (Gillespie, 2016). “Gaming to avoid fraud detection or avoid SEC regulation is destructive and undercuts the purpose of the system”. Also, transparency may look different depending on whether the purpose is to identify unjust biases or to ensure due process (Martin, 2018). Sandra Wachter, along with Brent Mittelstadt and Chris Russell, argues that algorithms should offer people “counterfactual explanations”: a disclosure of how they came to their decision, together with the smallest change “that can be made to obtain a desirable outcome” (2018). In the example of an algorithm refusing someone a home loan, it should tell the person the reason, like too little savings, but also what he or she can do to reverse the decision, in this case the minimum amount of savings needed to be approved (Matsakis, 2018). However, providing explanations alone does not address the heart of the problem: knowing which features of the data are used by automated systems to make a decision, and whether or not they are appropriate for the decision at hand (Fig. 2.6) (Martin, 2018).

Fig. 2.6. Transparency model proposal: adding in the missing masses to the algorithmic decision-making process (Martin, 2018).
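To make the counterfactual approach from Wachter, Mittelstadt and Russell more concrete, here is a minimal, hypothetical sketch of the home-loan example above. The decision rule, thresholds and figures are invented for illustration only; a real system would query its own model rather than this toy function.

```python
# Hypothetical sketch of a counterfactual explanation for a loan refusal.
# The decision rule below is a toy stand-in for a black-box model.

def loan_approved(savings, income):
    """Toy decision rule standing in for the real model."""
    return savings >= 10_000 and income >= 30_000

def counterfactual_savings(savings, income, step=100, limit=1_000_000):
    """Smallest increase in savings that flips a refusal into an approval."""
    if loan_approved(savings, income):
        return 0  # already approved, no change needed
    extra = step
    while extra <= limit:
        if loan_approved(savings + extra, income):
            return extra
        extra += step
    return None  # changing savings alone cannot reverse the decision

applicant = {"savings": 7_400, "income": 42_000}
needed = counterfactual_savings(**applicant)
if needed:
    print(f"Refused. Counterfactual: roughly {needed} more in savings would lead to approval.")
```

The value of such an explanation is that it gives the applicant both a reason (too little savings) and an actionable path to reverse the decision, without opening up the model itself; it still does not reveal whether savings was an appropriate feature to use in the first place, which is Martin’s critique above.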

Although there is a real wake-up call for algorithmic fairness and transparency, the technology field fears that requiring this technology to be explainable to ensure fairness will only slow down progress in maximising AI efficiency and accuracy (United Nations, 2018; Ananny & Crawford, 2016; Jones, 2017; Kroll et al., 2017). Besides, reaching algorithmic fairness does not address the broader problem of a society that might be unjust. In June this year, ICE modified its own risk assessment algorithm so that it could only produce one result: the system recommended “detain” for 100% of immigrants in custody (Whittaker et al., 2018; Matsakis, 2018). Maybe the real question is: how do we build fair algorithms in an unfair society? Moreover, if we do, will society be ready to adopt them?

Human-Machine Collaboration & Trust

While AI is increasingly applied to more and more industries every day, bringing convenience and efficiency at scale, it also brings the risk of a jobless future where human skills and autonomy are challenged and threatened by machines. The capacity for mass automation and the way businesses use it feel like a race against the machines, where businesses look at who will be the best at solving a particular task. However, as Kevin Kelly phrases it: “this is not a race against the machines. If we race against them, we lose. This is a race with the machines” (2016). From lightly technology-augmented employees to fully automated jobs, the key to the future of work is in human-machine collaboration (Gownder, cited in Wladawsky-Berger, 2018).

As developed in the previous part, AI can be used in different ways. While some automation replaces the work that people do, other automation enhances the work of people, making them more capable and competent (Norman, 2017) or weakened and less prepared to take control when technology fails (Carr, 2013). In response to the fear of human error when working with automated systems, theorists like Kevin Kelly argue, in the case of the autopilot, that human pilots should be entirely replaced by a fully autonomous autopilot, curing imperfect automation with total automation (Carr, 2013). However, as no machine is infallible and machines will have to operate in an imperfect world, this theory is unrealistic. To make sure AI will have the expected positive impact, Google and the design firms Ideo and Futurice call for focusing AI on enhancing and augmenting people’s capabilities rather than purely replacing them. While this movement has different names, “Human-Centred AI” at Google (Li, 2018), “Augmented Intelligence” at Ideo (2018) and “Intelligence Augmentation” at Futurice (Weir et al., 2018), they are all aligned on the same objective of grounding AI-powered technologies in human needs to assist and extend human capabilities. While this might help humanity keep jobs in the future, it will make people rely on machines even more than they already do today, challenging our autonomy and our capacity to maintain and develop expertise amid an exponential pace of technological advances. Therefore, assistive technologies need careful design considerations to enhance people’s capabilities without eroding their skills. Nicholas Carr presents some simple ways to temper automation’s ill effects, such as programming software to shift control back to human operators at frequent but irregular intervals, which keeps people engaged and promotes situational awareness and learning (2013). Furthermore, he suggests incorporating educational routines into software, requiring users to repeat difficult manual and mental tasks that encourage memory formation and skill building (Carr, 2013).
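As a purely illustrative sketch of Carr’s first suggestion (the loop, the probability and the messages are invented, not taken from any real autopilot or control system), software could be written to hand control back to its human operator at irregular, unpredictable moments:

```python
# Hypothetical sketch of shifting control back to the human operator at
# frequent but irregular intervals, as Carr suggests, to keep people engaged.
import random

def run_with_handbacks(total_steps=20, handback_chance=0.15, seed=None):
    """Simulate an automated loop that occasionally asks the human to take over."""
    rng = random.Random(seed)
    for step in range(total_steps):
        if rng.random() < handback_chance:
            # In a real system, automation would pause here, prompt the operator
            # and resume only once the manual task has been completed.
            print(f"Step {step}: handing control to the human operator.")
        else:
            print(f"Step {step}: automation handles the task.")

run_with_handbacks(seed=42)
```

The irregularity is the point: if operators could predict exactly when they will be asked to intervene, the engagement and situational-awareness benefits Carr describes would largely be lost.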

Fig. 2.7. The range of hybrid activities called “The missing middle” (Daugherty and Wilson, 2018, p.8).

When working together, humans and machines have the potential to create a superior, collaborative system and achieve a better outcome than either alone (see previous part). This belief lies in the idea that systems should be designed by using the strengths of both humans and machines. Norman claims that machines should do tasks that require processing information quickly and doing the maths, things that machines are good at, while letting people focus on more creative tasks requiring critical analysis of the context and environment, things that people are good at (2017). This collaboration between humans and machines opens up a range of hybrid activities explained in detail by Daugherty and Wilson: in some cases, humans complement the work of machines, and in others, AI gives humans superpowers (Fig. 2.7). From lightly technology-augmented employees to more automated jobs, this highlights that machines are designed with a specific delegation in mind to play a particular role within “the team”. This who-does-what delegation of roles between humans and machines raises lots of new considerations for designers, who need to think about the implications of delegating roles and responsibilities to machines within a larger decision context (Martin, 2018).

While automation has the potential to make us more human by taking over the tedious and repetitive tasks humans are not good at, “it will require us to be more critical and reflect on our practice to find where our human intelligence will be necessary” (Hume, 2018). Another critical aspect to research is how individuals are impacted by being part of an algorithmic decision-making process with non-human actors in the decision (Martin, 2018).

Accountability & Morality

While the latest advances in AI enable the delegation of new roles to algorithms within society, they also bring new and unfortunate social consequences when the technology fails or misbehaves. Accountability is vital for establishing avenues of redress and thereby protecting human rights and dignity (United Nations, 2018), but the current conversation absolves firms of responsibility. The inscrutable and unpredictable nature of machine learning algorithms and the difficulty of anticipating adverse effects on individuals or societies challenge the traditional concept of accountability, as well as the moral decision of delegating certain decisions to machines.

Accountability is complicated because “technologies tend to spread moral responsibility between many actors”: a car crash, for instance, requires an investigation of multiple factors, such as what the different people involved in the accident were doing, the state of the car’s brakes and who performed its last service (Bowles, 2018, p.12). Besides, although the bias problem is starting to be acknowledged by the industry, firms and developers argue that their algorithms are neutral and “so complicated and difficult to explain that assigning responsibility to the developer or the user is deemed inefficient and even impossible” (Martin, 2018). Furthermore, machine learning capacities defy the traditional conception of designer responsibility, as algorithms “learn” from the data rather than being 100% coded directly by developers (Mittelstadt et al., 2016). However, this does not change the fact that when technologists create an algorithm to perform a task, they make a conscious choice to delegate a specific role and the associated responsibility to the algorithm. Thereby, they take responsibility not only for the decision but also for the harms created, the principles violated and the rights diminished by the decision system they created. Whether firms acknowledge it or not, accountability is a design choice, and delegating the responsibility for a decision to an algorithm precludes users from taking responsibility for the ethical implications and places that responsibility on the firm which developed the algorithm (Martin, 2018).

One possible avenue for developing autonomous intelligent systems capable of following social and moral norms is to “identify the norms of the specific community in which the autonomous systems are to be deployed and, in particular, norms relevant to the kind of tasks that the autonomous systems are designed to perform” (IEEE, 2017). Martin adds the recommendation to also “define the features appropriate for use, and the dignity and rights at stake in the situated use of the algorithm” (2018). When creating autonomous agents, developers express “how things ought to be or not to be, or what is good or bad, or desirable or undesirable” (Kraemer et al., 2011). However, when “machines engage in human communities as autonomous agents, then those agents will be expected to follow the community’s social and moral norms” (IEEE, 2017). This implies that developers should know which specific norms apply to a given community in order to develop algorithms that respect them. Therefore, designers need to clearly define the “delineation of the community in which the autonomous intelligent systems are to be deployed”, as “relevant norms for self-driving vehicles, for example, will differ greatly from those for robots used in healthcare” (IEEE, 2017).

Fig. 2.8. Firm responsibility for algorithms (Martin, 2018).

Another possible way for future corporate responsibility is to define what level of accountability is appropriate within the decision context (Martin, 2018). In other words, the level of responsibility of an algorithm should depend on its application. As algorithms are increasingly used in the distribution of social goods such as education, employment, police protection or medical care, they can decide to terminate individuals’ Medicaid, food stamps and other welfare benefits, as well as take part in the “adjudication of important individual rights” (Citron, 2007). However, as Hume stresses, the importance of understanding what is inside the black box depends on the application, as a bad Amazon recommendation does not have the same drastic consequences as refusing someone a home loan (Hume, 2018). To adjust the level of accountability, Martin suggests a framework that links the role of the algorithm in a decision with the responsibility of the firm (Fig. 2.8). Thereby, an algorithm playing a significant role in a pivotal decision in the life of individuals, such as sentencing or the allocation of medical care, would be treated differently from an algorithm playing a significant role in a decision of minimal societal importance, like deciding where to place an ad online (Martin, 2018).

Although redefining the concept of responsibility is a good step towards a more responsible design of autonomous systems within society, firms developing algorithms need to be mindful of indirect biases, as the ethical implications of algorithms are not necessarily hard-coded in the design (Martin, 2018). Moreover, according to the Collingridge dilemma, “attempting to control a technology is difficult…because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow” (1980, p.19).

Next articles

If you are interested in this topic, you can read the other articles in this series via the links below:

more to come, stay tuned 📺

Bibliography

The full bibliography is available here.

Before you go

Clap 👏 if you enjoyed this article to help me raise awareness on the topic, so others can find it too
Comment 💬 if you have a question you’d like to ask me
Follow me 👇 on Medium to read the next articles of this series ‘Designing responsibly with AI’, and on Twitter @marion_bayle
