I made 2,000 sketches while I took a nap: here’s how AI did my design homework for me
How I slept through my most productive brainstorming session ever.

I’ve never been gifted at visualizing 3D objects in my head — a rather unfortunate trait for an Industrial Design college student. As a freshman, I would stay up late sketching as many new concepts as I could. The end result? 30–50 mediocre sketches.
Fast forward to senior year and now I can take a nap and wake up to 2,000 sketches that I could never even picture in my head.
How? By using Stable Diffusion! For this batch of images, I had to do six simple things (a rough code sketch of the workflow follows this list):
- Download Stable Diffusion’s model
- Run a Google Colab notebook
- Set my image settings
- Write out a batch of prompts and how many images I want for each prompt (this time I did 8 prompts with 200–300 results each)
- Fall asleep
- Wake up to my thousands of “sketches”
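For anyone who wants to try this themselves, here is roughly what that Colab session boils down to. This is a minimal sketch using the Hugging Face diffusers library, which is one common way to run Stable Diffusion; the checkpoint name, prompts, and counts are placeholders, not my real ones.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (an example choice; any
# compatible checkpoint works).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each prompt paired with how many "sketches" to generate for it.
batch = [
    ("industrial design marker sketch of a desk lamp", 250),
    ("concept sketch of an ergonomic office chair, side view", 200),
    # ...more prompts in the same spirit, eight in total
]

for p, (prompt, count) in enumerate(batch):
    for i in range(count):
        image = pipe(prompt, height=512, width=512).images[0]
        image.save(f"prompt{p}_{i:04d}.png")
```

Each image lands on disk as it finishes, so even an interrupted run leaves you with results to wake up to.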
And just like that, I experienced the most creative, ingenious concept exploration session of my life.
What is Stable Diffusion?
Stable Diffusion is a text-to-image AI model. You might have heard of others, like DALL·E 2 or Midjourney. They all operate in a similar way.

The input: a prompt you type. It could be a single word, a phrase, or one or more full sentences.
The output: an image that matches your prompt (usually a 512x512 square, since most of these models are trained at that resolution).
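In code, that whole contract is a single call. Here is a minimal sketch with the diffusers library again (the checkpoint and prompt are example choices):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Input: a text prompt (one word works, and so does a full sentence).
prompt = "a minimalist concept sketch of a bicycle"

# Output: a 512x512 PIL image that matches the prompt.
image = pipe(prompt, height=512, width=512).images[0]
image.save("bicycle.png")
```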
What makes Stable Diffusion different?
Stable Diffusion is open source. This means the model has been released to the public: anyone can download it and run it on their own computer. That makes it the best-suited model of the three (vs. DALL·E 2 and Midjourney) for large-batch image generation: not only is it free, it's also the only one that lets you produce more than four images at a time.
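To make that concrete: when the model runs on your own hardware, batch size is limited only by GPU memory, not by a product's interface. A quick sketch with diffusers (num_images_per_prompt is that library's parameter; the prompt is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate eight variations of one prompt in a single call.
# Locally, the only real cap on this number is GPU memory.
images = pipe(
    "industrial design concept sketch of a kettle",
    num_images_per_prompt=8,
).images

for i, img in enumerate(images):
    img.save(f"kettle_{i}.png")
```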