The dangerous thing AI designers can do

In the age of AI, design goes way beyond craft

Elaine Lu
Published in UX Collective
6 min read · Feb 24, 2025


Lately I’ve been hearing more about designers’ identity crisis, and about fitting in when the game has changed. This is accelerated by the fact that AI is changing not only design, but also the way new companies are built.

In the age of AI, designers’ scope will expand far beyond their craft. The most dangerous thing a designer can do in the age of AI is “just design.” Let me explain.

  1. AI’s output quality
  2. Good instinct from bad ideas
  3. Designing AI products
  4. Model testing & prototyping
  5. Smaller companies, fewer layers

AI’s output quality

Designers will need to master both designing for AI’s unpredictable outputs and shaping user experiences that guide and moderate what users feed into the system.

By now, we can reframe AI’s capability uncertainty as AI’s output quality variability. Today, most people are generally familiar with ChatGPT, Claude, Grok, Midjourney, etc. and know what they can do. But the biggest challenge in turning these capabilities into actually useful products is their variable output quality.

For any application using AI, consumer or enterprise, the AI needs to be accurate and fast. Fast can be achieved with smaller, more capable models and algorithmic choices, all of which are perpetually improving. But accuracy is more difficult to judge, and depends on having a human domain expert in the loop to evaluate the AI’s performance for task XYZ, based on prompt ABC.

(1) For each task, the required accuracy (for AI to be useful to people) will vary. Figma’s AI “make designs” or “first draft” features are more useful to a product manager than to a senior designer who can mock up something better in minutes. For AI to be useful to that senior designer, it will need to do different things.

(2) For each prompt into ChatGPT, some keywords will work better than others for executing the task. The same applies to image models, voice models, etc. Keywords and user actions are then translated into model calls. This is prompt engineering, which is both an art and a science.
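
To make these two points concrete, here is a minimal Python sketch. The names are hypothetical: call_model stands in for whatever model API a product actually uses (ChatGPT, Claude, an image model, etc.), and the ticket-summary task is just an example. It illustrates how a user action gets translated into a prompt and a model call, and how a domain expert in the loop can score outputs to judge whether accuracy is good enough for a specific task; it is a sketch of the idea, not a recipe.

```python
# Minimal sketch only. `call_model` is a hypothetical stand-in for
# whatever model API the product uses (ChatGPT, Claude, an image model, etc.).

def call_model(prompt: str) -> str:
    """Hypothetical model call; swap in your provider's API."""
    raise NotImplementedError

# (2) A user action ("summarize this ticket") is translated into a prompt.
# The exact keywords in the template can change output quality,
# which is what makes prompt engineering an art and a science.
PROMPT_TEMPLATE = (
    "You are a support analyst. Summarize the ticket below in two sentences, "
    "keeping any error codes verbatim.\n\nTicket:\n{ticket_text}"
)

def summarize_ticket(ticket_text: str) -> str:
    return call_model(PROMPT_TEMPLATE.format(ticket_text=ticket_text))

# (1) Accuracy is judged by a human domain expert in the loop:
# the expert scores a sample of outputs (0 or 1), and the pass rate
# tells you whether the model is accurate enough for this task.
def expert_review(tickets: list[str], expert_score) -> float:
    scores = [expert_score(t, summarize_ticket(t)) for t in tickets]
    return sum(scores) / len(scores) if scores else 0.0
```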
