The state of creative AI: will designers get superpowers?
Aside from stunning demos, generative AI is gradually entering creative production and simplifying time-consuming tasks. Over time, creatives and designers will gain superpowers, changing their jobs forever.

Disruptive innovation always starts with simple applications at the bottom of a market and moves up until it disrupts an industry. Over the past few years we’ve seen Artificial Intelligence (AI) enter the domain of creative and graphic production. We can expect the impact of creative AI — or generative AI — to grow in the next few years as the technology becomes more powerful.
Rather than cover every AI development in 2021, I will discuss examples that illustrate the current state of AI and how it may impact the jobs of creatives and graphic designers.
Experimental: Turning Ideas Into Images
Create photorealistic images from text
Wouldn’t it be awesome if AI could synthesize a photorealistic image based on your description? In January 2021, OpenAI impressed the world with DALL-E, an AI model that creates images from text input.
DALL-E can create anthropomorphized versions of animals and objects, combine unrelated concepts in plausible ways, use perspective, render text, and use different styles ranging from photorealistic to cartoons and paintings.

The examples are mind-boggling. Combining two unrelated concepts or applying an art style to an object is a typical first-year assignment to unlock creative thinking in product design students.
When I showed the “armchair in the shape of an avocado” to product designer friends, they were not amused.
The idea that AI could create such an object in seconds, while students work on it for weeks, made them uncomfortable.

OpenAI doesn’t provide a live demo; we only get to see a selection of images. OpenAI assures us, however, that these images were not cherry-picked by humans: another AI model, called CLIP, picked the best images.
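The reranking idea is simple: embed the text prompt and every candidate image in a shared vector space, then keep the images whose embeddings are most similar to the prompt’s. Here is a minimal sketch of that idea using hand-made mock embeddings; the real CLIP computes these vectors with neural networks, and the dimensions and values below are invented for illustration:

```python
import numpy as np

def rerank_by_clip_score(image_embeddings, text_embedding, top_k=3):
    """Rank candidate images by cosine similarity to the text prompt,
    the way a CLIP-style model scores image/text pairs."""
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    txt = text_embedding / np.linalg.norm(text_embedding)
    scores = imgs @ txt                   # cosine similarity per image
    order = np.argsort(scores)[::-1]      # best matches first
    return order[:top_k], scores[order[:top_k]]

# Toy example: 5 candidate "images" and one "prompt" in a shared 4-d space.
candidates = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],   # candidate 2 points the same way as the prompt
    [0, 0, 0, 1],
    [1, 1, 0, 0],
], dtype=float)
prompt = np.array([0.1, 0.0, 0.95, 0.0])

best, scores = rerank_by_clip_score(candidates, prompt, top_k=2)
print(best)  # candidate 2 ranks first
```

In DALL-E’s published examples, this is how the displayed images were selected from a larger batch of generations without human cherry-picking.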

Nvidia’s GauGAN2 (Nov ’21) does provide a live demo. GauGAN2 can create landscape images based on text input or sketches. The AI model was trained on 10 million landscape images, but in principle the same approach could be trained on other types of images too. Although impressive, the technology is still experimental: it is limited to landscapes, and the results are a bit too random to be useful.
Create drawings from text
Ah, drawing without having to put in all that hard work: it can be done. A team of AI researchers created a proof of concept called CLIPDraw (June ‘21). CLIPDraw is an algorithm that synthesizes new drawings based on text input.

If your 4-year-old drew this, you’d be impressed, but we expect better from AI. More sketches as training data should lead to better results, though.
Another team of researchers built on this model and created StyleCLIPDraw (Sept ‘21), an AI that generates drawings taking into account both a description and drawing style.
I took a shot at Henry Taylor’s painting “She might have loved those summer days but later she cried out!” with StyleCLIPDraw. I provided an image of the painting and the text prompt: “Man in a swimsuit walking a dog at a beach”. On the left you see the original painting, on the right the generated image.

The result is obviously not a Henry Taylor painting that could be sold for $150,000. Actually, I take that back; crazier things have been sold as NFTs for more money.
Create photorealistic images from sketches
Nvidia’s GauGAN2 not only takes text prompts; it also turns rough sketches into photorealistic landscapes. You can doodle your vision with different “material brushes” on a so-called segmentation map. In no time, your vision appears as a photorealistic rendering on the other side of the screen.
It’s pretty cool as a demo, but for real-world applications the results are not consistent enough.
For example, by adding a pond to the landscape, the trees might change shape as you can see in the example video.
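Conceptually, a segmentation map is nothing more than a grid of class labels, and the “material brushes” paint those labels onto the grid; a generator network then fills each labeled region with realistic texture. A minimal sketch of the painting step, with invented class IDs (GauGAN2’s actual label set and interface differ):

```python
import numpy as np

# Hypothetical material class IDs; GauGAN2's real label set is much larger.
SKY, WATER, TREE = 0, 1, 2

def paint(seg_map, brush, top, left, height, width):
    """Paint a rectangular stroke of one material onto the segmentation map."""
    seg_map[top:top + height, left:left + width] = brush
    return seg_map

# Start with an all-sky canvas and doodle a landscape.
canvas = np.full((8, 8), SKY, dtype=np.uint8)
paint(canvas, WATER, top=5, left=0, height=3, width=8)  # pond along the bottom
paint(canvas, TREE, top=2, left=1, height=3, width=2)   # a tree on the left
# A generator network (not shown) would map each class region to photorealistic
# texture; re-running it after an edit is why unrelated regions can change.
```

Because the whole image is re-synthesized from the map, a local edit such as adding the pond can ripple into other regions, which is exactly the consistency problem described above.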
Researchers are finding ways to control the output better. An image-to-image AI called pSp (Nov ‘21) combines segmentation maps or sketches with styles and turns them into photorealistic images of faces.

With more environments to choose from and consistent results, one can see how this could speed up the work of visual artists, landscape architects or scenario writers by turning their sketches into concept visuals or storyboards. Stepping it up a notch or two, you could turn a storyboard into a video.
Creating memes
It is only fitting for cyberculture that someone combined multi-million dollar AI research projects to generate memes. Memes — seemingly simple images with a caption in white outlined text in the Impact font — are the epitome of internet culture, rivaled only by TikTok in terms of virality and creativity.
AI could never, could it? Well, artist, inventor, and engineer Robert A. Gonsalves took a shot at it. And he got some good results. His project exemplifies the current dynamic in the field of AI: any piece of research is picked up instantly by others to create new knowledge and new applications.

Tried And Tested: Image Editing With AI
While the previous examples are still experimental, generative AI has entered commercial applications, including photo apps, AR filters, Photoshop and video editing.
The fun department
Since the word “selfie” was officially accepted for use in Scrabble in 2014, the world has grown only more obsessed with looking good in selfies.
FaceApp uses AI for photorealistic selfie tweaks. Millions of people have downloaded the app to look better, younger or older, grow facial hair, and so on, with dozens of filters. And millions of people have used Snapchat’s Cartoon 3D lens to share on TikTok, because who doesn’t want to look like a Disney character?

Deep Nostalgia, offered by online genealogy company MyHeritage (Feb ’21), uses AI to create the effect that a still photo is moving. Needless to say, Twitter had a field day trying to come up with the creepiest animation.

The productivity department
Adobe released its AI-powered “neural filters” for Photoshop in October ’20. The filters can enhance portraits, enlarge images without quality loss, colorize black-and-white images, change the facial expression or age of the model in the image, and so on.

Some filters are gimmicks, some still require manual clean up, but others can be a real time-saver, freeing you from tedious tasks and giving you more time for experimentation and creative freedom.
Masking images — separating foreground and background objects — is one of the most time-consuming tasks in photo editing. The latest Photoshop release of October ’21 contains an AI-powered “Hover Auto-Masking” feature: hover over an image, and it is masked automatically.

Back to the research department
Researchers from NVIDIA published a method called EditGAN (Nov ‘21) for high-quality image editing based on segmentation masks. For example, the user takes a brush representing headlights, paints big headlights on the segmentation mask of the car, and the model renders a photorealistic car with big headlights. The method could be used on any object, face or animal.

Although this method is still in the lab, just think of how many hours of tedious editing it could eliminate once it achieves higher resolutions.
Will Creatives And Graphic Designers Get Super-Powers?
The year 2021 was big for generative AI, with lots of experimentation in text-to-image generation. OpenAI’s DALL-E blew our minds, especially because it generates realistic-looking fantasy images, while most people assumed that fantasy is the stuff of humans. I don’t know when this will happen, and many creatives won’t like the idea, but I can certainly see a business case for AI-assisted idea generation.
Alongside the spectacular demos, we see generative AI gradually entering creative tools like Photoshop, making tedious and time-consuming tasks easier. The results may not be perfect yet, but that’s a matter of time. This will give superpowers to creatives and designers and ultimately change their jobs.
In the best case, these superpowers will give creatives, designers and photo editors more time to be creative and less time fiddling with tools. However, if your job is based on your ability to work with basic graphic tools, you may want to seek a more creative or higher-end production job.
In addition, creative agency management should scrutinize the hourly-rate business model.
This article is one in a series about the potential impact of generative AI on the creative industry. In this series I evaluate a variety of creative AI technologies to see how close they are to impacting the lives of creative workers. The series includes:
- Will Creatives And Graphic Designers Get Super-Powers (this article)
- Will Video Producers And Editors Get Super-Powers
- Will Copywriters Get Super-Powers
- Will Digital Marketers Get Super-Powers (upcoming)