AI today: definition, use cases, risks, and unexpected consequences on society
Part 2 of the series ‘Designing responsibly with AI’, written to help design teams understand the ethical challenges of working with AI and gain better control over the possible consequences of their designs on people and society.

Defining AI
Commonly called AI, Artificial Intelligence has been on everyone’s lips for the past few years, and nowadays not a day goes by without hearing about it in the media. However, what people mean when they talk about AI is very inconsistent. Discussing the topic rarely happens without first having to explain what each party means by it. Indeed, defining Artificial Intelligence is not an easy task, as there is no officially agreed definition we can refer to, even among AI researchers.
AI is a loaded term which has been twisted by the heritage of science fiction. When referring to AI, people often picture robots or other humanoid beings who, in some films, are friendly and serve humans or, in others, turn evil and want to kill all humans to take control of our planet (Elements of AI, n.d.). Fei-Fei Li, Director of the Stanford AI Lab, claims that the myth of the Terminator coming next door is, in fact, a real crisis for the development of the AI field, as it highlights the public’s misreading of the technology but also reveals a fear about the intentions of the people behind it (2018). Thus, a better understanding of AI is crucial to its future development and progress.
AI is, in fact, an ever-evolving term, which is one of the reasons it means very different things to different people. Artificial Intelligence is hard to define because the field has been continuously redefined with the advances of technology and the ambiguity of what we consider “intelligent”. Kathryn Hume, Vice President of Product and Strategy at integrate.ai, proposes the definition of her former colleague,
Hilary Mason: “Whatever computers can’t do until they can”,
which is more of a psychological take on the definition but succeeds in incorporating the notion of progress and development (2018). She states that what counted as AI ten to fifteen years ago is today viewed as standard old technology, like Google Maps, which runs complex machine learning algorithms. She highlights that autonomous cars are considered AI today because we are on the cusp of implementing them, but in ten to fifteen years they too will become mainstream, and we will move on to the next challenge of what qualifies as AI (2018).
AI is not new; it is a sixty-year-old field of computer science which encompasses other related fields such as machine learning and deep learning (Fig. 2.1). The term itself was coined by Professor John McCarthy in 1956, who was debating with a group of computer scientists whether a computer could think and imitate human-like intelligence (Stone et al., 2016). Three years later, Arthur Samuel, a pioneer in Artificial Intelligence research who was building a computer program to play checkers, coined the term “machine learning”: the field of study that gives computers the ability to learn without being explicitly programmed (McCarthy and Feigenbaum, n.d.). Since then, the growth of AI and machine learning has been intermittent and mostly confined to research labs, but in recent years they have found their way into practical business applications and started to make a significant impact on the industry (Daugherty and Wilson, 2018, p.43).

The field of AI has blossomed significantly in recent years under the convergence of three forces: advances in the mathematical tools of machine learning and deep learning, progress in computing hardware, and the explosion and availability of data, which together have boosted the development of AI to a whole different level (Li, 2018; Brynjolfsson, 2017; Stone et al., 2016). Erik Brynjolfsson, MIT Sloan School professor, stresses that the combination of these three critical ingredients has enabled, in some applications, a millionfold improvement, reaching better accuracy than humans (2017).

Thanks to the advances of machine learning that have revolutionised the field, AI now works very differently than it used to. Previously, engineers needed to code each rule, such as “if this then that”, but now computers can learn from examples and figure out the rules on their own without being explicitly programmed, using sources as varied as text, images, video and speech (Hume, 2018). Given enough data, machine learning algorithms can predict, personalise, recognise and uncover structure in the data to provide insights or identify anomalies (Weir et al., 2017; Drozdov, 2018) (Fig. 2.2). “Puppy or Muffin” (Fig. 2.3) is a good example of what AI can do in image recognition today, as it has reached better accuracy than humans, with less than a 5% error rate (Brynjolfsson, 2017).
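To make this contrast concrete, here is a minimal, hypothetical sketch in Python (the spam-filtering task, feature names and numbers are only illustrative, and it assumes scikit-learn is installed): the first function is an explicit “if this then that” rule written by an engineer, while the second approach lets a model infer a similar rule purely from labelled examples.

```python
# A minimal sketch (not from the article) contrasting hand-coded rules with learned rules.
# It assumes scikit-learn is installed; the spam-filtering task and values are made up.
from sklearn.linear_model import LogisticRegression

# The "old" approach: an engineer explicitly writes the rule.
def rule_based_spam_filter(num_links: int, num_exclamations: int) -> bool:
    return num_links > 3 and num_exclamations > 5

# The machine learning approach: the rule is inferred from labelled examples.
# Each example: [number of links, number of exclamation marks]; label 1 = spam, 0 = not spam.
examples = [[0, 0], [1, 1], [2, 1], [5, 8], [6, 9], [7, 10]]
labels = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(examples, labels)

# The model classifies an unseen message without anyone having written the rule.
print(model.predict([[4, 7]]))
```

Real systems like the image classifiers mentioned above follow the same pattern, only with millions of examples and far richer features.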

This recent breakthrough has led to many useful applications that might make people think that AI is reaching some super intelligence, but in reality, AI today is “narrow” or “weak”. Hume compares algorithms to idiot savants which can be super intelligent at one very, very narrow task, like diagnosing lung cancer better than a radiologist with a PhD, which feels particularly hyper-intelligent because it is not layman knowledge (2018). Brynjolfsson underlines that the biggest weakness of machines is their need for thousands or even millions of tagged examples to do a good job at recognising the difference between a cat and a dog, whereas a two-year-old would probably learn after one or two times (2017). A famous saying by an AI expert in the nineties illustrates another weakness of AI well and still reflects today’s state:
“the definition of today’s AI is a computer that can make a perfect chess move while the room is on fire”.
It points out that AI is only data-driven and lacks the contextual awareness, the holistic understanding, the nuances and much of the complexity of human intelligence (Li, 2018).
While AI has tremendous potential and already many applications within the capabilities of Artificial Narrow Intelligence, the truth is that AI is far from being able to generalise knowledge as humans do. Journalism epitomises the capabilities and limitations of these tools, which can write company earnings reports, hyper-targeted weather reports or even police reports, but cannot do any real investigative journalism, which requires critical thinking, interpretation and emotion (Hume, 2018). Hence, the terms “Artificial General Intelligence” and “Artificial Super Intelligence” belong only to the domain of science-fiction movies, as being able to exhibit human intelligence, or even surpass it in all aspects, from creativity to general wisdom to problem-solving, will require machines to experience consciousness (Jajal, 2018). That is why Brynjolfsson advocates for partnerships of humans and machines as the most successful approach in business (2017).
While AI’s key characteristics are autonomy and adaptivity, it is still built by humans using human knowledge to train algorithms. AI can perform tasks in complex environments without constant guidance from a user and improve performance by learning from examples (Elements of AI, n.d.). However, when training a system to diagnose cancer, it is the judgment calls that thousands of doctors have made in the past which are collected and transferred into a mathematical formula (Hume, 2018). This points up that humans remain the creators, and therefore they have the power to frame the narrative and decide what they want the technology to do.
What is AI for? Business applications
Although Artificial Intelligence sounds quite futuristic to many people, it is already here, making a massive impact on the industry and on people’s lives. Smart algorithms have found their way into many current applications, transforming the way we work and live. While AI is rolled out in workplaces through robotic process automation systems like bots, it is also widespread in our daily lives, from shopping recommendations to social media personalisation to smart assistants and, soon, self-driving cars. Enabling “unprecedented automation of tasks long thought undoable by machine” (Norman, 2017), AI’s new capabilities provide the means to reach new levels of productivity or deliver services in entirely new ways.
A natural starting point with AI is the automation of mundane, repetitive or time-consuming tasks that can be done faster by machines. “Machines take tasks off human employees’ plates” while humans oversee and complement the work of machines when necessary (Wladawsky-Berger, 2018; Daugherty and Wilson, 2018). Software robots, commonly called “bots”, capture explicit human knowledge to perform tasks such as processing changes of address, insurance claims, hospital bills or human resources forms. They are ideally used to “free up valuable human time for more complex, meaningful, or customer-facing tasks” (Guszcza, 2018). However, to be efficient, these tools, which run on autopilot much of the time, need to ensure an adequate handoff from computer to human when they require human intervention in exceptional or ambiguous situations (Guszcza, 2018), as the sketch below illustrates.
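A minimal sketch of such a handoff, assuming a hypothetical claims-processing bot; the confidence threshold, the amount cut-off and the names are assumptions for illustration, not a real product’s behaviour.

```python
# A minimal, hypothetical sketch of a computer-to-human handoff in a claims-processing bot.
# The threshold and escalation rule are assumptions, not drawn from the article.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    confidence: float  # the bot's confidence that it read and classified the claim correctly

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; choosing it is itself a design decision

def process(claim: Claim) -> str:
    # Routine, unambiguous cases are handled automatically.
    if claim.confidence >= CONFIDENCE_THRESHOLD and claim.amount < 10_000:
        return f"Claim {claim.claim_id}: processed automatically"
    # Exceptional or ambiguous cases are handed off to a person.
    return f"Claim {claim.claim_id}: routed to a human case worker"

for claim in [Claim("A-1", 250.0, 0.97), Claim("A-2", 50_000.0, 0.95), Claim("A-3", 120.0, 0.60)]:
    print(process(claim))
```

The interesting design work is not the happy path but the escalation: what context the human receives, and how easily they can take over.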
When not used for automation, AI has vast potential in enhancing or augmenting people’s capabilities. In some cases, machines take on part of the workload to assist humans in execution, giving them superpowers (Wladawsky-Berger, 2018; Daugherty and Wilson, 2018). A tool like Eva (Fig. 2.4) epitomises this use case: this AI assistant uses speech recognition and natural language processing to listen to participants in a meeting, records and transcribes their conversations, turns them into actions and delivers them to the mailbox or other applications of the appropriate employees (Voicera, n.d.).

In other cases, AI algorithms can provide information to help employees act by extracting actionable insights from the data, or even by deciding which data sets to analyse (Wladawsky-Berger, 2018). Udacity is a great example: the company built a bot to advise salespeople and help them perform better during a call. Instead of replacing people, the bot, which uses data from the best salespeople to teach the algorithm, enabled 50% more success and helped people learn more rapidly (Brynjolfsson, 2017). Another example is tools that help doctors diagnose breast cancer, like the LYmph Node Assistant, or LYNA, developed by Google, which can act as a sort of “spell check” for pathologists. In addition to making doctors more productive, AI can help them do their job better than they could before, as researchers found that pathologists who were given the tool performed better at picking up cancerous cells on an image than both pathologists who did not get the tool and the tool used on its own (Ramsey, 2018). This finding suggests that “the human mind is far more powerful when coupled with the smart tool” and “the combination is far superior to either one alone”, as Don Norman states (2017). However, these tools need to be designed with a deep understanding of the strengths of both people and technology to create a superior, collaborative system (Norman, 2017).
While efficiency is often the first goal for companies, AI can also enable a new type of innovation and lead to entirely new ways of delivering services by taking advantage of real-time user data. In their book “Human + Machine: Reimagining Work in the Age of AI”, Paul R. Daugherty and H. James Wilson explore this new thinking by taking the example of Waze. This GPS mobile application re-routes drivers through traffic to avoid slow-downs by using “real-time user data — about drivers’ locations and speeds as well as crowd-sourced information about the traffic jam, accidents, and other obstructions — to create the perfect map in real time”. While the ‘old’ approach merely digitised static paper-map routes, Waze completely reimagined traditional processes by combining “AI algorithms and real-time data to create a living, dynamic, optimised map” (2018, p.6). Similarly, Kathryn Hume describes a tax advice service she worked on for a big accounting firm. Where previously tax advice was only relevant on the day it was given to a client, because of frequent shifts in regulations and opinions, the application of AI led to a new subscription-based business model giving clients dynamic, updated advice in real time (2018). Accordingly, the design firm Futurice calls AI “a real-time dance of human and machine intelligence” (Weir et al., 2017). These new types of innovation are increasingly introducing new design challenges that transform not only service processes but also business models, as they need to create the structure that allows nimble and relevant satisfaction of customer needs (Norman, 2017).
Risks and unexpected consequences on society
Although AI has started to show a positive impact on the industry, it also comes with many risks “when the technology fails, succeeds beyond expectations, or simply used in unexpected ways” (Bowles, 2018, p.8). In 2018 alone, a range of unexpected adverse consequences affected society at many different levels (Fig. 2.5).

Bad algorithmic decisions threatening human rights and safety
Although the latest advances in AI have improved accuracy and efficiency, smart algorithms are inherently uncertain, as no machine learning technique is 100% accurate except on trivial problems (Weir et al., 2017). It means that errors sometimes happen, and this is especially problematic when algorithms make bad decisions about important things (Brynjolfsson, 2017). This year alone, there have been many examples of AI systems failing while being tested on live populations in high-stakes domains. In March, a self-driving Uber car killed a pedestrian in Arizona (Wakabayashi, 2018); in May, a flawed voice recognition algorithm used to detect immigration fraud led the UK to deport thousands of students in error (Sonnad, 2018); and in July, it was reported that IBM Watson had recommended ‘unsafe and incorrect’ cancer treatments (Ross and Swetlitz, 2018). The promise of safer driving, a fraudless world or better healthcare comes with a major risk of failure that threatens human safety and rights. Advocates of AI like Calum Chase argue that, in the case of self-driving cars, the technology will kill a few people, but still fewer than human drivers do (Chase, 2017). In the case of the international students deported in error, the algorithm’s accuracy was only 60%, which led them to lose homes, jobs and futures. Knowing that no algorithm can be correct 100% of the time, an important question remains for society: how many machine errors are acceptable when they can ruin human lives? (Sonnad, 2018).
From assisting human decisions to threatening our autonomy
While automated processes are nothing new, the adaptive nature of AI is dramatically taking automation to a whole new level, bringing more efficiency for businesses and workers (Daugherty and Wilson, 2018, p.5). However, as AI takes more and more tasks out of humans’ hands, it can also have a damaging effect on people’s skills and on their ability to use those skills when they need to take over from failing machines, leading to human errors. Researchers Raja Parasuraman and Dietrich H. Manzey explain this seeming paradox: they found that complacency was one of the factors behind human error when interacting with automated decision support systems like plane autopilots (2010). Indeed, automation complacency occurs when we trust an assistive tool too much, allowing our attention to drift. It weakens our awareness of the world around us and our attentiveness, as we become disengaged from our work, leading to mistakes. Nicholas Carr, author of “The Glass Cage: Who Needs Humans Anyway?”, stresses a deeper issue: when using highly sophisticated tools that make life easier, people turn from actors into observers, which inhibits the development of expertise (2013). He further explains that, when people have only a small role in a task and end up functioning as mere monitors, they become passive watchers of screens, a job that humans, with their notoriously wandering minds, are especially bad at (Carr, 2013). When, for example, a system like Google Maps assists drivers by providing itineraries, it also changes the way people drive, as they rely on the application to give them directions. They no longer think about which routes they should take and can get lost when the technology fails or misbehaves. This phenomenon is called “the automation paradox”: “the more reliant we become on technology, the less prepared we are to take control in the exceptional cases when the technology fails” (Guszcza, 2018). Given that no autonomous system can be correct all the time, and that such systems increasingly assist people’s work and lives, how can we make sure people’s skills will be ready when we need them the most?
From bias to discrimination & inequality
“At their best, AI and algorithmic decision-support systems can be used to augment human judgement and reduce both conscious and unconscious biases” (AI Now Institute, n.d.). However, there is a growing consensus that the way AI systems are designed, along with the data used to train their algorithms, perpetuates and amplifies the biases already present in our culture, leading to even more discrimination (Whittaker et al., 2018). Indeed, algorithms can be racist, sexist, and reflect other structural inequalities found in our society (Matsakis, 2018). This recognition comes in the wake of a string of examples, including evidence of bias in risk assessment for sentencing (Angwin et al., 2016), healthcare benefits (Lechter, 2018), hiring processes (Goodman, 2018) and visa fraud detection (Sonnad, 2018). In the sentencing case, ProPublica, a non-profit newsroom that produces investigative journalism, found that the COMPAS algorithm presented significant racial disparities, as it was “particularly likely to falsely flag black defendants as future criminals” at twice the rate of white defendants. Furthermore, “white defendants were mislabeled as low risk more often than black defendants” (Angwin et al., 2016). The truth is that data can be biased, as it is often incomplete, skewed or drawn from non-representative samples, and developers can encode bias, consciously or unconsciously, when programming the machine learning models (Campolo et al., 2017). This is especially problematic when automated decision systems are used in the public sector and in complex social systems, as they may disproportionately affect disadvantaged people and reinforce existing inequalities, regardless of the intentions of the developers (United Nations, 2018).
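To make this kind of finding concrete, here is a minimal sketch, with entirely made-up records, of how one could compare the false positive rate of a risk score across two groups; the group names and numbers are illustrative and are not drawn from the COMPAS data.

```python
# A minimal sketch, with made-up records, of a group-wise false positive rate check,
# the kind of disparity measurement behind the ProPublica finding cited above.
from collections import defaultdict

# Each record: (group, flagged_high_risk, reoffended) -- hypothetical values for illustration.
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_b", False, False), ("group_b", True, True), ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, flagged_high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged_high_risk:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")
```

A large gap between the two rates is exactly the kind of disparity ProPublica reported, even when overall accuracy looks similar across groups.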
Black-box algorithms in automated decision-making threatening human rights
Algorithms are often compared to black boxes, as “current (deep-learning) mechanisms are unable to link decisions to inputs meaningfully, and therefore cannot explain their acts in ways that we can understand” (Dignum, 2017). This means that algorithms have become so inscrutable that even their creators cannot explain how they work, leaving the people affected by their decisions completely in the dark. In the case of COMPAS, the crime-predicting algorithm previously mentioned, defendants are not able to question the process by which their score was calculated (Martin, 2018). This implies that, for example, a defendant might be classified as ‘high risk’ when he is not, and cannot ask how this result was reached. In health care, a Medicaid program suddenly cut the hours of care for people with severe disabilities without any valid reason to do so, and both the people affected by the decision and the assessors using the tool were unable to understand why (Lechter, 2018). In many similar examples, algorithmic tools used by states to inform decisions have upended people’s lives in drastic ways without any explanation, and even without giving them the means to challenge the process by which their results were determined. Furthermore, states often decline to disclose the formula, claiming that the math used by the algorithm is a trade secret: “For risk assessment algorithms, the existence of the algorithm, the factors considered, and the weight given to each are kept secret by claiming the algorithm is proprietary” (Smith, 2016; Wexler, 2017). This situation is particularly worrying as these opaque automated systems, known not to perform flawlessly, are increasingly adopted for life-altering decisions, undercutting individuals’ rights to due process and dignity.
Next articles
If you are interested in this topic, you can read the other articles in this series on the below links:
- Part 1 — Introducing ‘Designing responsibly with AI’
- Part 2 — AI today: definition, what is AI for? risks and unexpected consequences on society (you’re here 👈)
- Part 3 — Ethical dilemmas of AI: fairness, transparency, human-machine collaboration, trust, accountability & morality
- Part 4 — Design of Responsible AI: imbuing values into autonomous systems
more to come, stay tuned 📺
Bibliography
The full bibliography is available here.
Before you go
Clap 👏 if you enjoyed this article to help me raise awareness on the topic, so others can find it too
Comment 💬 if you have a question you’d like to ask me
Follow me 👇 on Medium to read the next articles of this series ‘Designing responsibly with AI’, and on Twitter @marion_bayle