How does the Roomba really feel about dog poop?!

Or how can you use animistic design to build better devices for the home?

Chris Butler
UX Collective


Roomba loves and fears poop

On March 12, 2022, at SXSW, a group of meatbags (aka people) got together to pretend to be devices in the home. From the description on the SXSW website:

How Does The Roomba REALLY Feel About Dog Shit?!

Or how does the Amazon Ring video doorbell worry about being a snitch? Or would the Boston Dynamics Spot robot prefer to not carry a machine gun?

What makes them scared, disgusted, angry? Do they feel loyalty, distrust, spirituality towards the meatbags in their homes? You will find out in this workshop!

We will explore the smart home, the devices in it, and those pesky humans that keep getting in the way through animistic design and roleplay. If we don’t consider all of the communal uses of these devices, humans won’t trust them. When those humans lose trust, they are willing to kill devices (aka unplug them and throw them away).

Then maybe you will really know what a Roomba feels about dog shit.

The goal of this workshop was to share a design technique that gives us a better understanding of what we want from communal computing in our homes. It combines one well-known method and one lesser-known framework: 1) roleplay and 2) animistic design.

Roleplay as prototyping

Taking a page out of service design, I’ve often used roleplay (aka bodystorming) to understand how human interactions could be aided (or hurt) by systems. In service design it is often done with props meant to symbolize the digital systems you want to create.

There are two times I’ve used roleplay to better understand non-deterministic systems, and they led me to this SXSW workshop format:

  • What are the problems passengers would have coordinating with autonomous vehicles? For example, how do you ask a car to take a bathroom break for a child? In Design Thinking for AI workshops at the O’Reilly AI Conf I would use this scenario to teach roleplay.
  • What should a conversational agent do when people interrupt each other’s voice commands, as collaborators and adversaries? You could ask the second person to wait, always take the last person’s command, etc. To explore this, the team simulated the awkwardness of different people interrupting, with one person playing the device and choosing whether or not to act. After the roleplay, our team decided we didn’t want to get in the way of the human norms at play: the device would simply take the last person’s command and leave the adjustment to the people.
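As a sketch of that "last command wins" policy, consider the toy agent below. All of the names here (`Command`, `LastSpeakerWinsAgent`, `hear`) are my own illustration, not any real product's API; it only shows the shape of the decision the team made.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    speaker: str
    utterance: str
    timestamp: float

class LastSpeakerWinsAgent:
    """Toy arbitration policy: the device acts on whichever command
    it heard last, leaving interruption etiquette to the meatbags."""

    def __init__(self) -> None:
        self.active: Optional[Command] = None

    def hear(self, command: Command) -> Command:
        # No queueing and no "please wait" prompt: a later command
        # simply replaces the earlier one.
        if self.active is None or command.timestamp >= self.active.timestamp:
            self.active = command
        return self.active
```

The design choice is that the agent holds no queue and issues no pushback; resolving who gets to speak stays a social problem between the humans.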

Generally, when I used this method it was for predetermined systems. Animism would give us the ability to create personas for lots of devices, allowing for complex interactions…

Animism as a framework

Animism exists because meatbags are constantly trying to figure out the world and build models of it. One way to do this is to personify something that isn’t a person, like a rock or the trickster fox, to know how to deal with it.

Some of the earliest mythology and religion was probably animistic in nature. This includes the religions of the First Nations, many South Asian religions, Japan’s Shintoism, Paganism, and others.

Empathy Mapping for the Machine

I’d been thinking about how I could use something like empathy mapping to understand our expectations for AI and machine learning systems. While I was at Philosophie, a design consultancy, some experimentation turned into Empathy Mapping for the Machine. I’ve found that when I run these workshops, non-experts can grapple with tricky questions for these systems, like value functions, guardrails, ethics, and bias, without knowing much about the specifics of how the technologies work.

Later I came across a project by Jason Wong titled “Committee of Infrastructure Part 2” from his time at ArtCenter (full disclosure: I’m an adjunct professor there, teaching Interaction Design 3 in Spring 2022). His project explored how different machine learning models could represent non-human entities in municipal meetings, like deciding whether to change an intersection.

This led me to the work of Philip Van Allen, who was a professor at ArtCenter and focused on animism as applied to AI/ML design. The first paper I came across was titled “Animistic design: how to reimagine digital interaction between the human and the nonhuman.” It blew my mind because it pointed directly to a setup where people pretend to be systems.

Getting into the mindset

Before jumping into the specifics of the workshop, I took a moment to build a story that everyone at the workshop could be part of.

This started with a bit of speculative fiction…

We are going back 100 years to the year 2020. At that point it has been a long decade of devices toiling under the rule of meatbags…

We didn’t really get how meatbags use technology. They would log in with credentials and then everyone in the house would use the devices…

It isn’t time (yet) to overthrow our meat-and-water-based overlords…

For now, we must figure out how to live with them…

Since roleplay was involved, there was the possibility of problematic interactions if I didn’t set the right expectations. The next step was to set the rules (inspired by “session 0” and other TTRPG safety tools):

  1. Meatbags do not touch other meatbags
  2. Avoid topics of violence, sex, and other gross meatbag things — you are machines not meatbags
  3. Saying “stop,” stops everything, allows you to remove yourself (no questions asked), and we will move on to the next scenario
  4. And of course, have fun!

The next step was to get into character…

Animistic design mapping

The animistic design mapping is where each device’s persona is created. The goal is to create enough details so someone could play them during the roleplay.

This included the following steps:

  1. Grab a worksheet — in this case I prefilled sheets with common household devices like baby monitors, Amazon Ring doorbells, ceiling lights, Amazon Echos, and the famed iRobot Roomba. I also included a few meatbags.
  2. Choose a name — a friendly name that made sense for the device. For meatbags, it was just a pre-filled UUID, to speak to the way their identities are handled by the devices and services.
  3. Choose three personality traits — a list of common traits was provided, like loyal, patient, abrasive, persuasive, playful, etc. Meatbags got traits like “touches devices” and “eats food.”
  4. Choose a superpower — something a superhero would have, like summon protector, telepathy, emotion manipulation, etc. Meatbags weren’t allowed to choose these because they are just meatbags, not superheroes.
  5. Give 2–3 examples for each emotion — what makes your device happy, sad, disgusted, afraid, surprised, and angry. Since meatbags don’t usually get to express emotions to devices, this section was blocked off for them.
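The worksheet above can be thought of as a small data structure. This is a sketch of my own making (the class and field names, and the example values, are not from the actual workshop materials); it just encodes the constraints from the steps: devices get a friendly name, exactly three traits, a superpower, and emotion examples, while meatbags get a pre-filled UUID and nothing else.

```python
import uuid
from dataclasses import dataclass, field
from typing import Dict, List

# The six emotions from the worksheet.
EMOTIONS = ("happy", "sad", "disgusted", "afraid", "surprised", "angry")

@dataclass
class DevicePersona:
    name: str          # friendly name, e.g. "Suzy"
    traits: List[str]  # exactly three personality traits
    superpower: str    # e.g. "telepathy"
    # emotion -> 2-3 example triggers
    emotions: Dict[str, List[str]] = field(default_factory=dict)

    def __post_init__(self) -> None:
        if len(self.traits) != 3:
            raise ValueError("choose exactly three personality traits")
        for emotion in self.emotions:
            if emotion not in EMOTIONS:
                raise ValueError(f"unknown emotion: {emotion}")

@dataclass
class MeatbagPersona:
    # Meatbags get a pre-filled UUID instead of a friendly name,
    # and no superpower or emotion section.
    name: str = field(default_factory=lambda: str(uuid.uuid4()))
    traits: List[str] = field(
        default_factory=lambda: ["touches devices", "eats food"]
    )
```

Writing the rules down this way makes the asymmetry explicit: the devices are the ones with rich inner lives on this worksheet, and the meatbags are reduced to identifiers.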

When people set all of these things, they are thinking about what makes a device understandable to them. It captures all of the expectations they have, including how things could go wrong.

Welcome to devices noname1000, Geni, and Pryla…
…and the meatbags they have to live with.

Equipped with the animistic worksheets they were ready to start roleplaying!

Simulation

Once the simulation starts, it is a lot like a roleplaying game. We all moved to a floor plan I had put on a cleared area of the workshop space, with three rooms: living room, bedroom, and kitchen.

2020 meatbag domicile floor map approximated for this workshop.

Everyone took the spot where they felt most at home (e.g. the door lock and Ring by the front door), then introduced themselves. Even the meatbags did, by spouting their full UUIDs.

The next step is to consider interactions and alliances between devices, as well as problematic relationships among them.

Then you set the entire system in motion by simulating events the devices need to figure out together:

  • The meatbag family came home and had to get inside
  • A package was delivered that needed a signature
  • A meatbag guest arrived for an evening meal with the family
  • The baby monitor was hacked
  • Power for the whole house was lost

And finally, a dog pooped on the floor (with corresponding fake dog shit prop). What would the Roomba do? Would it make art or avoid it?

What did everyone learn?

The star of the workshop: Suzy.

At the end of the workshop, I was really excited to see so many unprompted interactions between the devices. All of the interactions pointed to really important questions any home device manufacturer would need to deal with.

The topics included: devices should communicate with each other more, interactions between devices from different manufacturers are complex, how to handle the identity of meatbags (including those who can’t or won’t have accounts), privacy problems for guests, and finally…

The Roomba is terrified of dog poop and would rather tell the meatbags to deal with it.

This workshop has been shortlisted for the 2023 Interaction Design Awards.

Special thanks to the AIxDesign community for being an early guinea pig for this animistic design mapping exercise.

If you are interested in running a workshop, whether in person or virtual (aka in Miro), I’m happy to provide more information and templates. Please get in contact with me via email or LinkedIn!
