What makes user interfaces intelligent?

Daniel Buschek · Published in UX Collective · Mar 14, 2020
Teaser illustration: an old typewriter spitting out papers labelled “AI”, “HCI”, and “UI”. Original photo: https://www.pexels.com/@dom-j-7304

What do we actually mean when we refer to interactive technology as “intelligent”?

To answer this question, we conducted a data-driven literature analysis. Here I share the key insights from our paper that are relevant for anyone involved in creating (intelligent) user interfaces. They give you a communication tool, for example to clarify what is intelligent about your UI or product in discussions among interdisciplinary teams and various stakeholders.

Extracting an emerging perspective on intelligence

First, let me say what we did not do: try to come up with yet another definition of AI, intelligence, and so on. Instead, we followed a bottom-up approach: we extracted the emerging perspective on “intelligence” in a research community working at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI). Concretely, we analysed 25 years’ worth of research papers from the ACM Intelligent User Interfaces (IUI) conference.

The method we used had two steps — text analysis and manual coding:

First, we used text analysis to extract every sentence containing “intelligen*” ever published at the IUI conference. This spans 25 years’ worth of research in HCI & AI and covers the nice number of 1,111 analysed papers. 504 of these papers mentioned “intelligent” or a grammatical derivative at least once. In total, we extracted 1,804 sentences in this way.
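As a rough illustration of this extraction step (not the exact pipeline from the paper), a minimal Python sketch could look like the following; the sentence-splitting heuristic and the example text are assumptions for the sketch:

```python
import re

# Naive sentence splitter; a real pipeline would likely use a proper NLP tokenizer.
SENTENCE_SPLIT = re.compile(r"(?<=[.!?])\s+")
# Matches "intelligent", "intelligence", "intelligently", ... (case-insensitive).
INTELLIGEN = re.compile(r"\bintelligen\w*", re.IGNORECASE)

def extract_intelligence_sentences(paper_text: str) -> list[str]:
    """Return every sentence of a paper's text that mentions 'intelligen*'."""
    sentences = SENTENCE_SPLIT.split(paper_text)
    return [s.strip() for s in sentences if INTELLIGEN.search(s)]

# Hypothetical usage on one paper's plain text (made-up example, not from the corpus):
example = ("We present an intelligent user interface for email triage. "
           "It adapts to the user's behaviour over time. "
           "Intelligence here manifests in automation.")
print(extract_intelligence_sentences(example))
# -> the first and third sentences, since both mention 'intelligen*'
```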

Second, we manually reviewed these sentences regarding three key aspects:

  1. Entities: What is deemed intelligent?
  2. Co-descriptors: How (else) is it characterised?
  3. Actions: What capabilities are attributed to it?

For example, at the first IUI conference in 1993, Woods characterised an intelligent interface as “an autonomous computer agent who mediates between process and practitioner”. This would yield an entity (“agent”), co-descriptor (“autonomous”) and action (“mediate”). See the paper for more details on how we coded the sentences in this way.
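If you like to think in code, one way to represent such a coded sentence is a small record type. Here is a sketch (the field names are mine, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class CodedSentence:
    """One 'intelligen*' sentence, coded along the three aspects above."""
    sentence: str
    entities: list[str] = field(default_factory=list)        # what is deemed intelligent
    co_descriptors: list[str] = field(default_factory=list)  # how (else) it is characterised
    actions: list[str] = field(default_factory=list)         # capabilities attributed to it

# Woods (1993), coded as in the example above:
woods_1993 = CodedSentence(
    sentence=("an autonomous computer agent who mediates "
              "between process and practitioner"),
    entities=["agent"],
    co_descriptors=["autonomous"],
    actions=["mediate"],
)
print(woods_1993)
```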

What is intelligent in UIs? What does it do for users?

Let’s dive into the results. This figure shows the top ten entities, co-descriptors, and actions that IUI researchers most frequently refer to when describing something as “intelligent”:

Bar chart showing the top ten entities, co-descriptors, and actions.
The top ten intelligent entities, co-descriptors, and actions, ranked by the number of papers in which they occur.

We now look at three findings that characterise this emerging perspective on intelligence in user interfaces.

Finding 1: Intelligent UIs assist the user

Assisting the user is most often seen as the key action that the intelligent entity performs.

Further common actions are creating, detecting, adapting, recommending, interacting, and understanding. These varied top actions suggest that the analysed interactive systems and UIs address tasks that lend themselves to mixed-initiative interaction: both user and AI are actively involved, in various roles.

Finding 2: Intelligent UIs adapt to the user and automate tasks

Adaptation, automation, and interaction are the most common aspects that researchers highlight when describing something as intelligent. They are also present across all years. We thus see them as the most coherent core of the emerging understanding of intelligence among experts at this intersection of HCI & AI.

Finding 3: Different UI concepts are intelligent in different ways

Interfaces, systems, agents, and assistants are the most common entities to which researchers attribute intelligence. This reveals that the IUI community has broadly embraced intelligence both for more traditional UIs (e.g. GUIs) and for agents, such as chatbots and voice assistants.

Heatmap showing the relative occurrence of co-descriptors per entity.
Relative occurrence of co-descriptors per entity. These patterns suggest that the experts associated different aspects with intelligence in UIs depending on the interaction concept (entity) they had in mind.

There are interesting differences, as shown in the figure: intelligent interfaces and systems tend to be described relatively more often as adaptive and interactive, compared to agents and assistants, which in turn tend to be described relatively more often as autonomous. Assistants are also often labelled as personal.
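To make “relative occurrence” concrete: one plausible way to read the heatmap rows is that co-descriptor counts are normalised within each entity, roughly as in this sketch (the counts below are made up for illustration, not the study’s actual numbers):

```python
from collections import Counter

# Illustrative, made-up counts of papers in which an entity co-occurs
# with a co-descriptor; not the actual numbers from the study.
counts = {
    "interface": Counter({"adaptive": 30, "interactive": 25, "autonomous": 5}),
    "agent":     Counter({"adaptive": 10, "interactive": 8,  "autonomous": 22}),
}

def relative_occurrence(per_entity: dict) -> dict:
    """Normalise co-descriptor counts within each entity (one heatmap row per entity)."""
    result = {}
    for entity, descriptor_counts in per_entity.items():
        total = sum(descriptor_counts.values())
        result[entity] = {d: n / total for d, n in descriptor_counts.items()}
    return result

print(relative_occurrence(counts))
# e.g. 'interface' -> adaptive 0.50, interactive ~0.42, autonomous ~0.08
```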

Thus, the IUI experts’ implied assumptions and goals around intelligence vary depending on whether the technology is seen, for example, as a UI, system, tool, or agent.

Trend: Growing diversity of what is “intelligent”

We also found a trend over time: the figure below shows a growing diversity in what IUI researchers refer to as intelligent and how they describe it. In contrast, the main actions of intelligent entities are already present from early on.

Line plot showing the development of the total number of distinct entities, co-descriptors, and actions over time.
Development of the total number of distinct entities, co-descriptors, and actions over time. Read: at each point, Y distinct codes have appeared up to and including year X.

Takeaways

We next discuss two takeaways: 1) the emerging implicit perspective on intelligence in UIs, and 2) how we can all engage more explicitly with our assumptions about intelligence in the technology that we shape and use.

Takeaway 1: Perspective — Intelligent technology assists humans

Together, these findings on intelligent user interfaces contribute to the wider ongoing discussion about the role of AI in our technology and in how we use it.

This is the perspective on AI that emerges from our analysis:

The analysed research implicitly emphasises using AI in UIs for assisting users. It sees intelligence manifest in adaptation and automation, used interactively.

We can link the perspective emerging here to recent calls for AI to be rendered interactive. Thus, interactive intelligent systems may use AI to improve UIs, or they may enable interactive use of AI through new UIs. In both cases, the goal is to use AI to augment what people can do, and not to replace them.

Takeaway 2: Engaging with perspectives on “intelligence” as UI designers, developers, and researchers

We found very few attempts at explicitly defining intelligence in the analysed papers. Rather, experts at this intersection of HCI & AI seem mostly to rely on an implicit understanding of intelligence.

Our analysis suggests that we can do three things to more explicitly engage with our assumptions and goals around “intelligence” in the technology that we shape and use. We recommend:

  1. State how you construe interaction: in particular, do you regard what you offer to users as a system/tool or as an agent/assistant?
  2. State how your UI relates to the key aspects of adaptation and automation, and how these matter for users during interaction.
  3. Use these aspects to build and communicate a pragmatic working definition of what is intelligent about your UI/system/product.

We consider these recommendations a useful communication tool, for example to clarify discussions in interdisciplinary teams or in presentations to multiple stakeholders. In such contexts, the various people involved may have very different views on what is “intelligent” about the technology.

Here, explicating your perspective in this way should prove very useful. And while our views on intelligence in technology are likely to change over time, today these recommendations allow you to build on the emerging perspective of 25 years of research at the intersection of HCI & AI.

The full analysis and report is available in our paper.

Written by Daniel Buschek

Professor at University of Bayreuth, Germany. Human-computer interaction, intelligent user interfaces, interactive AI.
