Creativity requires just enough confusion

Using randomness and confusion to reframe problems and their possible solutions

Chris Butler
UX Collective

--

Generated image based on the partial prompt: incorporating elements of randomness such as dice and tarot cards, it visually represents the “collabracadabra” concept. It combines teamwork, science, and fun with a hint of unpredictability, creating a dynamic and engaging scene.
“Collabracadabra” by ChatGPT-4

As a “chaotic good product manager,” I find that adding little bits of confusion to the work we do is helpful. It isn’t because I find it funny or amusing, but because “enough confusion” is what drives innovation and creativity.

I’ve done this with teams by adding a bit of randomness to sketching sessions and to decision-making as a way to break out of our biases. But I’ve found that sometimes there is just too much confusion for someone to deal with. People in high-certainty roles like engineering, sales, and management can balk at using Oblique Strategies or Tarot cards in ideation sessions.

It can feel silly or stupid to them. They might think: “What is this bullshit?” And: “Why aren’t we just making the best decision we can with the logic and reasoning we already have?”

What is “just enough” confusion

Think about innovation and creativity as taking concepts that already exist and using them in new ways, new combinations, and new domains. New ideas don’t just appear out of nowhere; the person doing the creative act has to have the pieces to put together.

One theory that describes this type of activity is the adjacent possible, a concept introduced by Stuart Kauffman. Picture a graph of current combinations, uses, and so on; creativity takes place one step beyond it. Every step we take opens up more possibilities in the graph of concepts.

A graph: a set of current concepts leads to the new concepts that are possible next, then to the possible concepts after that, and so on.
The adjacent possible network of concepts.

You can also use this concept to consider how friendly or accepting someone might be toward an idea. If it is too far a leap for them, even if it makes sense to you, it won’t be something they will accept. It is just too many steps away.

This is where I think confusion plays a key role. It acts as a new bridge to previously unthinkable concepts. When we are forced to react to confusion, we are able to reframe what is possible. It is a transformative act for the person.

The people who balk at these tools are usually being asked to push a bit further than they feel comfortable with. But what if we pushed them in ways that were more adjacent to what they currently understand?

Enter a model that can do some amazing things to mash concepts together in a way that still sounds like human language: the LLM.

Oblique Strategies for [x]

Oblique Strategies

Oblique Strategies originated with Brian Eno and Peter Schmidt. They worked with very creative people who sometimes needed a jolt or provocation to get out of their heads when they were stuck. That is where the cards come from.

The problem is that they can be very oblique. “Courage!” is sometimes hard to parse in the context of software. “Change instrument roles” may not make much sense outside of music. The deck works for people used to making far-flung connections, but it could be made easier for those who are not.

One way I’ve started to experiment with this is by creating a prompt for Bard or ChatGPT like the following:

Pretend you are Brian Eno and need to create a new Oblique Strategy deck of cards. The people that need to use the cards aren’t able to understand the obliqueness as much and need something that is more appropriate for their domain. You will create at least 20 provocations that would cause them to reconsider their current framing of a problem or solution. I’d like you to create the custom Oblique Strategies deck for cross-functional teams of product managers, engineers, and designers building a consumer video calling service.

The first five it came back with are (check out the full list of 20 here):

  1. Flip the Interface: Imagine the user interface is the exact opposite of what it is now. How does this change user interaction?
  2. Minimalist Day: Remove one feature that seems essential. How does its absence reshape the service?
  3. Empathy Hour: Spend an hour using your service with the mindset of a technologically challenged user. What insights do you gain?
  4. Random Feature Swap: Swap a feature with a completely unrelated app. What innovative functionality emerges?
  5. Silence Speaks: Design a way for users to communicate without words or video. What new form of interaction is born?

Having worked on a product that did video calling, I thought these were pretty good and would definitely spark a good conversation.

When trying to create a “confusing enough” provocation, we can modify the prompt for the appropriate domain and ask the model to be more or less oblique, dialing in an appropriate set of provocations.
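As a rough sketch of what that dialing could look like in practice (this assumes the OpenAI Python SDK and an API key in the environment; the model name, the prompt wording, and the 1–5 obliqueness scale are my own illustrative choices, not a fixed recipe):

from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY is set

client = OpenAI()

def custom_oblique_deck(domain: str, obliqueness: int = 3, count: int = 20) -> str:
    """Ask the model for a domain-specific Oblique Strategies-style deck.

    obliqueness runs from 1 (very literal and domain-specific) to 5
    (as cryptic as the original cards).
    """
    prompt = (
        "Pretend you are Brian Eno and need to create a new Oblique Strategies "
        f"deck of cards for {domain}. Create at least {count} provocations that "
        "would cause the team to reconsider their current framing of a problem "
        "or solution. On a scale of 1 (very literal) to 5 (very cryptic), "
        f"aim for about a {obliqueness}."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A fairly literal deck for the video calling team described above.
print(custom_oblique_deck(
    "a cross-functional team of product managers, engineers, and designers "
    "building a consumer video calling service",
    obliqueness=2,
))

Nudging that obliqueness number up or down is a crude but workable dial for how much confusion the team is handed.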

Synthetic points-of-view

Another workshop I’ve used to get people out of their own heads (and assumptions) is to consider how a famous or historical figure might decide in the same circumstance.

This could mean including people we look up to or whose values we feel aligned with. In my case, I might ask: what would a meta-disciplinary thinker like John Boyd, Brian Eno, or Donald Glover do?

Even with fully synthetic agents, we can learn from them to consider new possibilities. AlphaGo changed the world of Go by offering new ways to think about playing. Once people had access to open-source models and could study the way these models made moves, the entire field improved. This is outlined in a 2022 paper: “Superhuman artificial intelligence can improve human decision-making by increasing novelty.”

After the open sourcing of Go models, people were able to increase their skill.

This leads to being able to simulate agents that might be the antithesis of who we are. We could have these agents argue the other side of what we are thinking.

This could even lead to the you that is more/less open, the you that holds more/less diverse viewpoints, and even the anti-you. All of these are interesting provocations, especially in a simulated conversation.
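Here is a minimal sketch of what such a simulated conversation could look like, again assuming the OpenAI Python SDK; the persona descriptions, the “anti-you” wording, and the turn count are illustrative assumptions rather than a finished tool:

from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY is set

client = OpenAI()

YOU = (
    "You are a product manager who prefers incremental, low-risk changes "
    "and wants hard evidence before acting."
)
ANTI_YOU = (
    "You are the anti-version of that product manager: you prefer bold, "
    "irreversible bets and distrust consensus. Argue against whatever the "
    "other speaker just proposed."
)

def debate(topic: str, turns: int = 4) -> None:
    """Alternate between the two personas for a few short turns on a topic."""
    transcript = f"Topic: {topic}"
    personas = [("You", YOU), ("Anti-you", ANTI_YOU)]
    for i in range(turns):
        name, system = personas[i % 2]
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": transcript + "\n\nRespond in two sentences."},
            ],
        ).choices[0].message.content
        transcript += f"\n{name}: {reply}"
        print(f"{name}: {reply}\n")

debate("Should we remove the mute button from our video calling service?")

Even a toy loop like this surfaces objections a team would not raise on its own, which is exactly the point of the provocation.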

What I believe this eventually leads us to is a set of models that know what we are and are not comfortable with. They can start to generate the right amount of confusion based on our needs and the domain we are being asked to consider.

I’ve started to experiment with prompts to achieve these viewpoints, but it is still a work in progress…

Confusion as the gateway to good discourse

We need confusion to reconsider our current framing of the world. This is something we can do on our own, but it can be hard to dial in the right amount. With services like LLMs that can synthesize human language, we can adjust the confusion to the right level.

Eventually this will lead to exciting teams of people, digital versions of them, and AI agents that push our ability to have beneficial discourse. Creating “just enough” confusion in the team will allow us to consider new possibilities for the increasingly complex problems we need to address in the future.
