Co-creating with the 🤖

Investigating our new creative relationship with Artificial Intelligence.

Arvind Sanjeev
UX Collective

--

Blurring the lines of human vs machine creativity

Artificial Intelligence is one of the longest-running biomimicry projects in the history of scientific research. Just as the human brain is hardware (or wetware) running the software of our mind, machines pair GPU/CPU processing units (the hardware) with neural networks (the software). Psychologists initiated this line of research to better understand how the brain works and how the mind learns. One of the first neural networks, the Perceptron (1957), was built by an American psychologist, Frank Rosenblatt, who modeled it on what he had learned from probing a frog’s brain.

A researcher first trains the Perceptron on a picture of a female subject, then feeds the picture back in to have the machine classify it as female.
The Perceptron by Frank Rosenblatt, 1957

Even though the pioneers of Artificial Intelligence were visionaries who believed AI could change the world, most doubted whether machines would ever think like us or be “creative”.

A screenshot of professor Jerome S Bruner from the documentary: The Thinking Machine.
Excerpt from ‘The Thinking Machine’, MIT 1960

“Where my doubt comes in is whether we shall be able to produce machines capable of creative thinking. I doubt very much that any artificial information processing system will ever be able to do this kind of inventive thing…I rather doubt if we will ever be able to do this in our lifetime.”
— Jerome S Bruner, Harvard 1960

However, we crossed this tipping point some time ago. This article traces how AI is disrupting creativity and augmenting our prototyping processes, through examples and projects from our course at CIID.

A snippet from the movie Metropolis showing the AI: Maria being brought to life.
Metropolis (1927) by Fritz Lang - the first film to feature an AI

Co-creating with the 🤖 was a one-week course I taught with Matt Visco at the Copenhagen Institute of Interaction Design. During the week, we investigated the new creative relationship we are forming with machines through playful experiments powered by Artificial Intelligence, while staying mindful of some of the unintended consequences it is creating in our world.

Listed below are some of the different creative disciplines where storytellers, musicians, and artists have been using AI to co-create new work.

Co-creating *stories* with the 🤖

Storytelling has historically been a uniquely human skill; some stories are pure fiction, while others draw on our lived experiences. With the advent of GPT (Generative Pre-trained Transformer) models, machines can now tell stories too, after being trained on a corpus of human-written stories.
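As a toy illustration of the idea (train on a corpus, then continue a prompt), here is a minimal word-level Markov chain in Python. It is a deliberately crude stand-in for a transformer, not how GPT actually works; the tiny corpus and parameters are invented for the example:

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Build a word-level Markov model mapping each n-gram to possible next words."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def complete(model, prompt, length=10, order=2, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        key = tuple(out[-order:])
        options = model.get(key)
        if not options:  # n-gram never seen in the corpus: stop generating
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = ("the knight rode into the dark forest . "
          "the dark forest hid a dragon . "
          "the dragon guarded the castle .")
model = train(corpus)
print(complete(model, "the dark"))  # continues with "forest", then whatever follows
```

Where this toy chain can only echo word sequences it has literally seen, GPT generalizes across billions of examples, which is why its completions feel genuinely new.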

AI Dungeon
AI Dungeon is a platform where you can co-write stories with the machine. Start by choosing a world for your narrative, then enter prompts and watch the machine continue the story based on them:

AI Dungeon — co-create stories with AI

Fable virtual beings
Fable has reimagined storytelling experiences told through video. GPT brings the animated characters from the story to life and allows viewers to converse with them in real time. Video storytelling can now evolve from a one-way monologue into an engaging experience where you interact with the fictional characters. The short video below shows a real-time conversation with one of their characters, Lucy:

Virtual Beings from Fable Studio

Co-creating *music* with the 🤖

What makes AI models so compelling is their ability to reveal hidden patterns and reproduce them. Music is one of the best places for AI to shine, since a model can be trained on choruses, progressions, and other repeating structures to produce a cohesive track.

Holly Herndon
Holly is a computer scientist turned music artist who actively collaborates with AI in her compositions. One of her popular tracks, Godmother, was made in collaboration with Spawn, an AI model she trained on percussion tracks.

One of her most notable explorations is Holly+, a digital twin of herself built from an AI model trained on her vocals. She released the model publicly, allowing anyone to upload songs to the platform, which then outputs the same music sung in Holly’s voice.

Another example is a current fad on YouTube: after being trained on their music, AI models are used to create new “original” tracks by yesteryear artists like The Beatles, Frank Sinatra, and Nirvana. This is a “new” Nirvana song made by an AI:

Jukebox is another project from OpenAI that hosts machine-made music inspired by popular artists like Kendrick Lamar, Taylor Swift, and Beyoncé, to name a few. Pick an artist to listen to new compositions imagined by an AI.

A screenshot of OpenAI’s Jukebox showing a new AI-generated track, with lyrics, inspired by the artist Kanye West.
Jukebox is an AI-powered platform from OpenAI that generates new tracks in the style of existing artists

Co-creating *X* with the 🤖

The above examples show how machines can produce creative content in mediums like writing, movies, and music, but the same AI models can also extend to many other contexts.

A preview of the AI powered platform: DALL-E which shows images being generated based on a text prompt given by the user.
DALL-E is a platform that can create images based on prompts that you give it

Researchers are already envisioning a future where AI models can produce anything we want from a simple prompt. We may no longer have to learn the machine’s language; instead, machines will get better at understanding our needs.

Left: a website layout generator and Right: a code completion model

What is creativity?

Humans have a long history and relationship with tools, from prehistoric times when tools unlocked the ability to hunt and gather to modern times where digital tools allow creative ideas to take new forms. Beyond extending our capabilities, tools also inform our creative thought process — give a person a hammer, and all they see are nails. But now, with AI, these tools not only inform our creative process but can be creative themselves.

Left: BeepleGAN — a GAN trained on Beeple’s artworks through RunwayML & Right: Alternate Grizzle from Botto — an autonomous AI artist by Mario Klingemann

Humans have now taken on the role of a teacher who shares good-quality examples to train the AI student. And once the model is ready, we switch to the role of a curator, filtering and selecting the best outputs from the machine.

Two different frameworks are shown for traditional tools and creative AI tools. In the traditional tools, an artist has the idea and uses tools to implement it. While for creative AI tools, an artist feeds a collection of content to the AI and the AI creates ideas for the artists to choose from.

This begs the question: what is “creativity”?

Definition for creativity from the Oxford dictionary: the use of imagination or original ideas to create something; inventiveness.

Just like how a human artist gets inspired by the works of other artists or their own experiences, aren’t machines also imagining new ideas after being fed a dataset of inspirational works?

Prototyping with the 🤖

Machines are not only forming a new creative relationship with us; they are also supercharging our existing relationship with tools. Machine Learning has become a powerful way for designers, artists, and creators to rapidly prototype experiences without a heavy technical background. Unlike traditional programming, where you spend most of your time writing code, machine learning lets creators train new models from examples in a few minutes, freeing up their time to focus on the core experience itself. In my previous article on this topic, I described in depth how machine learning is supercharging prototyping and the tools you can use for it:
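The "train from a handful of examples" workflow behind tools like Teachable Machine can be sketched with a nearest-centroid classifier. Everything here is a hypothetical stand-in: the two gesture classes and the 2-D feature vectors are invented, whereas real tools operate on image embeddings:

```python
from statistics import mean

def train(examples):
    """examples: {label: [feature vectors]} -> {label: centroid vector}."""
    return {label: [mean(dim) for dim in zip(*vectors)]
            for label, vectors in examples.items()}

def classify(centroids, vector):
    """Return the label whose class centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(vector, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Two made-up gesture classes, each "taught" with three example vectors.
examples = {
    "wave":  [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]],
    "still": [[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]],
}
centroids = train(examples)
print(classify(centroids, [0.85, 0.15]))  # → wave (closest to the "wave" examples)
```

The appeal for prototyping is exactly this shape: no classification logic is hand-written; you only supply labeled examples and the model generalizes from them.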

During our week at CIID, we introduced the students to some of the latest prototyping tools in machine learning: ml5.js, Teachable Machine, and RunwayML, to name a few. We then discussed the ethical implications and unintended consequences that AI models are creating in our world. Equipped with these tools and this knowledge, the students were asked to explore their new creative relationship with machines by prototyping a COVID-friendly installation that offers people an immersive experience.

The projects documented below are some of the work that came out of this short week of exploration:

1. TIQUICIA GRÁFICA
By Karla Ramona Umaña Hernández, Jennifer Cob, Jose Avila

Tiquicia Gráfica is a surreal exploration of the ever-changing graphical identity of Costa Rica. The team collaborated with IdenticaCR, an archive of handmade Costa Rican signs and posters, and downloaded more than 7,000 images from IdenticaCR’s Instagram profile to form the core dataset for the project.

The exhibition space for Tiquicia Gráfica

The images were sorted into four categories: faces, characters, signs, and food. Four different GAN models were then trained, one per category, allowing the team to explore the patterns and nuances of these graphic expressions in detail.

The outcome was an interactive exhibition: a gallery of more than 100 printouts and an interactive projection mapping that let visitors blend into the new graphical concept.

2. STUDIO RUMBA
By Joshua Tercero, Mia Pond

Studio Rumba is a Machine Learning-based dance teacher that helps users do the salsa and find their groove. Designed for beginners, Studio Rumba breaks down salsa into simple steps to help users learn the basics.

The Studio Rumba dance assistant’s visual guide

The project uses the PoseNet machine learning framework to track the user’s movements. The ML “teacher” then instructs them to move their right and left feet in sync with the music through a visual guide.
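A hypothetical sketch of how such a step check might work: PoseNet returns 17 named keypoints per frame, and comparing ankle positions across frames tells you which foot moved. The pose format mirrors PoseNet's output, but the threshold and feedback logic are invented for illustration:

```python
MOVE_THRESHOLD = 20  # pixels; an assumed tuning value

def ankle_positions(pose):
    """Extract leftAnkle/rightAnkle (x, y) from a PoseNet-style pose dict."""
    return {kp["part"]: (kp["position"]["x"], kp["position"]["y"])
            for kp in pose["keypoints"] if kp["part"].endswith("Ankle")}

def detect_step(prev_pose, curr_pose):
    """Return the ankle that moved the most, or None if neither moved enough."""
    prev, curr = ankle_positions(prev_pose), ankle_positions(curr_pose)
    moves = {part: abs(curr[part][0] - prev[part][0]) +
                   abs(curr[part][1] - prev[part][1]) for part in prev}
    moved = max(moves, key=moves.get)
    return moved if moves[moved] >= MOVE_THRESHOLD else None

def feedback(expected_foot, prev_pose, curr_pose):
    """Compare the detected step against the salsa step the teacher expects."""
    step = detect_step(prev_pose, curr_pose)
    return "In sync!" if step == expected_foot else "Try your %s" % expected_foot

# Two mocked frames: only the left ankle moves 40px to the right.
prev = {"keypoints": [{"part": "leftAnkle",  "position": {"x": 100, "y": 400}},
                      {"part": "rightAnkle", "position": {"x": 200, "y": 400}}]}
curr = {"keypoints": [{"part": "leftAnkle",  "position": {"x": 140, "y": 400}},
                      {"part": "rightAnkle", "position": {"x": 200, "y": 400}}]}
print(feedback("leftAnkle", prev, curr))  # → In sync!
```

In the real installation the frames would come from PoseNet running on a live webcam feed, timed against the beat of the music.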

3. FUTURE-BOT
By Wolfe Erikson, Deepshikha Kapoor

The Future-Bot AI tarot reading experience

Tarot is an ancient tradition used by skilled practitioners as a tool for divination. During a reading, the tarot reader acquaints themselves with the energy of the person they are reading for and draws cards for a spread.

Using the person’s energy paired with the symbology of the cards’ images, the practitioner delivers messages to them. The team was curious how they might take a tradition rooted in antiquity, like tarot, and apply it in a new way with technology.

Anyone can sit at Future-Bot and wave their hand over the area of their life they would like clarity on: Career or Love. Future-Bot then uses machine learning to read their facial expression. Depending on the values, the expression falls into one of three categories, and each category delivers a corresponding tarot card.
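The expression-to-card logic might look like the sketch below. Everything here is hypothetical: the expression scores stand in for a face-tracking model's output, and the three categories and card assignments are invented, not the team's actual mapping:

```python
CARDS = {  # invented card mapping for illustration
    "Career": {"positive": "The Sun",    "neutral": "The Wheel of Fortune",
               "negative": "The Tower"},
    "Love":   {"positive": "The Lovers", "neutral": "The Moon",
               "negative": "The Hermit"},
}

def categorize(scores):
    """Collapse per-expression scores (0..1) into one of three categories."""
    dominant = max(scores, key=scores.get)
    if dominant == "happy":
        return "positive"
    if dominant in ("sad", "angry", "fearful"):
        return "negative"
    return "neutral"

def draw_card(topic, scores):
    """Pick the tarot card for the chosen topic based on the facial expression."""
    return CARDS[topic][categorize(scores)]

print(draw_card("Love", {"happy": 0.7, "sad": 0.1, "neutral": 0.2}))  # → The Lovers
```

The structure mirrors the installation's flow: topic selected by hand-wave, expression read by the camera, card chosen by the category the expression falls into.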

4. STRETCH!
By Aniruddh Ravipati, Priscilla Garita, Sofía Elizondo

Stretch! is an interactive, playful installation that can live discreetly on any street, corridor, or empty wall. It offers a moment of respite for adults and kids alike.

Some previews of the installation

With the help of coco-ssd, an object detection model, it identifies people in front of a camera and translates them into cute stretchable rectangles! These rectangle avatars react to the people in front of them and invite them to move their bodies in different ways to trigger sounds and visual reactions.
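A hypothetical sketch of the detection-to-avatar step: coco-ssd returns detections as objects with a class name, a confidence score, and a `[x, y, width, height]` bounding box. The filtering threshold and the "stretch" factor below are invented values:

```python
def person_avatars(detections, min_score=0.5, stretch=1.2):
    """Keep confident 'person' detections and turn each into an avatar rectangle."""
    avatars = []
    for det in detections:
        if det["class"] == "person" and det["score"] >= min_score:
            x, y, w, h = det["bbox"]
            # The avatar exaggerates the person's height, giving the
            # rectangle its playful "stretched" look.
            avatars.append({"x": x, "y": y, "w": w, "h": h * stretch})
    return avatars

# Mocked coco-ssd output: one confident person, one chair, one uncertain person.
detections = [
    {"class": "person", "score": 0.92, "bbox": [40, 60, 80, 200]},
    {"class": "chair",  "score": 0.88, "bbox": [300, 200, 60, 90]},
    {"class": "person", "score": 0.30, "bbox": [500, 80, 70, 180]},
]
print(person_avatars(detections))  # only the confident person becomes an avatar
```

In the installation, the same filtering would run per video frame, with the rectangles animated and sonified as people move.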

5. TRU-EMOTION
By Manali Mohanty, Pablo Corella, Victoria Portuguez

TRU-EMOTION is an emotionally reactive immersive installation. It is an invitation to explore the many facets of human emotions, abstracted and expressed in a physical space.

The exhibition space of Tru-Emotion

Using facial emotion tracking in p5.js, seven facial expressions (happy, sad, surprised, angry, fearful, disgusted, and neutral) get interpreted into unique colors and patterns. The visuals are projected onto a labyrinth of translucent panels suspended mid-air in an airy, darkened space. The panels act as a screen by creating a three-dimensional canvas that allows light to pass through. Accompanying music completes the experience.

Co-creating a mindful future with the 🤖

A photograph showing Garry Kasparov being frustrated while playing against Deep Blue in 1997.
Garry Kasparov playing against Deep Blue in 1997. Image: STAN HONDA/AFP via Getty Images

When Garry Kasparov, the world chess champion, was beaten by IBM’s chess program Deep Blue in 1997, he realized that, rather than playing against the machine, combining forces with it would create an entity far stronger than either a machine or a human alone.

A centaur is shown galloping.
Image source: tenor.com

In this new supercharged creative partnership, humans merge their creativity, empathy, and intuition with the machine’s brute-force ability to compute and process, becoming powerful human-machine centaurs. Unlocking this partnership, however, also comes with the responsibility to use it ethically.

For instance, the latest GPT-3 model has made it almost impossible to tell AI-written content apart from human-authored text. Media publications like the LA Times have been using AI models for a while now to generate articles with very short turnaround times. At the same time, it has become effortless for anyone to spread propaganda or conspiracy theories by letting a trained GPT-3 model publish thousands of articles on the internet.

The Turing test, which could once tell machines apart from humans, is no longer a reliable filter; now more than ever, we need new methods for detecting and flagging content created by AI bots.

The Intersection, a short film from Superflux that extrapolates unintended consequences from some of the current emerging technologies into the future

The same tools designed to supercharge creativity are also leading to unintended consequences that impact our democracy, privacy, and women’s safety: deepfakes, deep nudes, fake news, and models built on biased data are direct examples. However, by creating awareness of these consequences and democratizing knowledge about these traditional black boxes, we can create opportunities to engage our communities with AI and spark healthy debates on its applications, thereby co-creating a mindful future with the machines.

I plan to host more of these exploratory workshops in the future; if you are interested in participating, please sign up here.
