Midjourney vs DALL·E: what’s the best AI for UI design?
(or “which one is going to steal our jobs first?”)

Can AI really do UI work?
Yes. Emerging technology known as text-to-image (T2I) AI can create images from words. This means an AI can attempt to produce UI work if we prompt it properly. And at the very least, it can produce inspiration for our own work.
What about UX work?
Unfortunately, T2I AI models are image generation models; they can’t serve as a way to produce good UX work. Although the outputs might reflect good UX principles (since they are based on existing images), it’s a dice roll. Ultimately, it’s always best to rely on your own knowledge of UX principles when it comes to screen design.
AI can help with UX work though. ChatGPT can provide knowledge and feedback about user experience. I will cover how to use ChatGPT for UX design in a future article.
What AI can do this?
The two T2I offerings I’ll be covering are Midjourney and DALL·E 2. They both act on the same premise but were developed by different groups.
Midjourney has one of the best models right now and a subscription-based price plan. DALL·E 2, meanwhile, is created by OpenAI, a leader in this space with the backing of Microsoft, and uses a credit system that essentially lets you pay per prompt. Let’s get comparing!
How should I prompt it?
Experimentation is key with T2I technology (that’s why I pay for Midjourney but not DALL·E), and there is no solved, optimal way to prompt. So although I can’t give you a definitive answer, I can pass along some of what I’ve learned.
There are two ways you can prompt:
- Sentence(s) format. Similar to how you would describe it to a person. Example: The user interface of a therapy app made in Figma with a flat vector style, trending on Dribbble.
- Tokenized format. This is listing out the elements you want in your image. Example: therapy app, flat vector, Figma, dribbble, user interface
Generally, the tokenized format works better as it cuts out unnecessary words for the model to process. Although…
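If you’re generating prompts programmatically (say, batching variations through an API), the two formats above are easy to build. A minimal sketch in Python — the helper name and element list are illustrative, not part of any tool’s API:

```python
def tokenized_prompt(elements):
    """Join prompt elements into a comma-separated token list."""
    return ", ".join(elements)

# Sentence format: written out as you would describe it to a person.
sentence = ("The user interface of a therapy app made in Figma "
            "with a flat vector style, trending on Dribbble.")

# Tokenized format: just the elements you want, nothing extra.
elements = ["therapy app", "flat vector", "Figma", "dribbble", "user interface"]
tokens = tokenized_prompt(elements)
# tokens == "therapy app, flat vector, Figma, dribbble, user interface"
```

Keeping the elements in a list like this also makes it trivial to swap one token at a time and compare results, which is the fastest way I’ve found to learn what each model responds to.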