
I made 2,000 sketches while I took a nap: here’s how AI did my design homework for me

Matthew Askari
Published in UX Collective
4 min read · Nov 8, 2022
A random selection of 18 of the thousands of sketches I created with Stable Diffusion.

I’ve never been gifted at visualizing 3D objects in my head — a rather unfortunate trait for an Industrial Design college student. As a freshman, I would stay up late sketching as many new concepts as I could. The end result? 30–50 mediocre sketches.

Fast forward to senior year and now I can take a nap and wake up to 2,000 sketches that I could never even picture in my head.

How? By using Stable Diffusion! For this batch of images, I had to do 6 simple things (roughly sketched in code right after the list):

  1. Download Stable Diffusion’s model
  2. Run a Google Colab notebook
  3. Set my image settings
  4. Write out a batch of prompts and how many images I want for each prompt (this time I did 8 prompts with 200–300 results each)
  5. Fall asleep
  6. Wake up to my thousands of “sketches”
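To make those six steps concrete, here is a rough sketch of what a batch run like this can look like using the open-source diffusers library. It is not my exact notebook, and the checkpoint name, prompts, counts, and output folder below are all placeholders.

```python
import os

import torch
from diffusers import StableDiffusionPipeline

# Steps 1-2: load the publicly released Stable Diffusion weights
# (on Colab this runs on the GPU runtime).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

# Step 4: a batch of prompts and how many images to generate for each.
# Placeholder prompts; the real run used 8 prompts at 200-300 images each.
batch = {
    "pencil sketch of a minimalist lounge chair, side view": 250,
    "marker concept sketch of a cantilevered dining chair": 250,
}

os.makedirs("sketches", exist_ok=True)

# Steps 5-6: let the loop run unattended, then collect the results.
for p, (prompt, count) in enumerate(batch.items()):
    for i in range(count):
        # Step 3: image settings (512x512, the resolution the model was trained on).
        image = pipe(prompt, height=512, width=512).images[0]
        image.save(f"sketches/prompt{p}_{i:04d}.png")
```

The outer loop is the whole trick: once the prompts are written, the GPU keeps working while you sleep.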

And just like that, I experienced the most creative, ingenious concept exploration session of my life.

What is Stable Diffusion?

Stable Diffusion is a text-to-image A.I. model. You might have heard of others, like DALL·E 2 or Midjourney. They all operate in a similar way.

More sketches created for my homework, this time in DALL·E 2.

The input: a prompt you type. It can be a single word, a short phrase, or one or more full sentences.

The output: an image that matches your prompt (usually a 512×512 square, the resolution most of these models are trained at).
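In code, that input-to-output contract is a single call. This is again a rough sketch with the diffusers library; the checkpoint and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

# Same kind of pipeline as in the batch sketch above; checkpoint name is a placeholder.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The input: a prompt you type (placeholder text).
prompt = "ink sketch of an ergonomic office chair, three-quarter view"

# The output: a 512x512 image that matches the prompt.
image = pipe(prompt, height=512, width=512).images[0]
image.save("chair.png")
```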

What makes Stable Diffusion different?

Stable Diffusion is open source. The model has been released to the public, so anyone can download and run it on their own computer. That makes it the best-suited model (versus DALL·E 2 and Midjourney) for large-batch image generation: not only is it free, it's also the only one of the three that lets you produce more than four images at a time.


Written by Matthew Askari

Recent graduate, designer, A.I. enthusiast. If you would like the free link to any of my paywalled articles, please reach out!
