Missing a point: the UX of subtitles
Do you realize how much context we lose when we lose sound? In a world with everything already invented, can designers do more? And can my obsession with Korean content help me with that? Let’s see!

Before starting with any subtitle improvements, I want to note that all subtitles should follow readability and comprehension rules, such as those in the captioning tip sheet made by DCMP. Any suggestions I make here are not intended to be used in obnoxious amounts. Design elements and motion add distraction, and the goal of this article is not to bedazzle subtitles but to start a conversation about the ways designers could improve them.
So why do we need subtitles?
The reasons are:
- Accessibility.
- Translations.
- For situations when sound cannot be used.
- As an addition to the sound for better comprehension of the speech.
- To enhance emotion.
The first thing subtitles provide is accessibility: a better experience for people who are hard of hearing or deaf. However, subtitles and captions are not used only for accessibility. Research done by Preply found that “at least 89% of respondents indicated that they’ve used subtitles in the past.” A Stagetext survey found that “80 percent of 18–24-year-olds use subtitles some or all of the time” when watching content on any device. With such a large audience, shouldn’t we review the usability of subtitles? With the original rules for subtitles in mind, let’s think beyond our expectations of them.
Let’s talk accessibility
Great research was done by Oliver Alonzo, Hijung Valentina Shin, and Dingzeyu Li in their paper “Beyond Subtitles: Captioning and Visualizing Non-speech Sounds to Improve Accessibility of User-Generated Videos”. It offers valuable insights into the experiences of deaf and hard of hearing individuals who interact with subtitles, and it highlights how little contextual information current subtitles convey, leaving people with a feeling of missing out.
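To make the idea of captioning non-speech sounds concrete, here is a minimal WebVTT sketch. The timings, cue text, and the `.sound` class name are invented for this example; it is meant only to show that the standard caption format already supports positioning a cue near a sound source and styling it differently from dialogue:

```vtt
WEBVTT

STYLE
::cue(.sound) {
  color: #cccccc;
  font-style: italic;
}

00:00:01.000 --> 00:00:03.500 position:50% line:85%
Where were you last night?

00:00:03.500 --> 00:00:05.000 position:80% line:10% align:end
<c.sound>[door slams offscreen]</c>
```

The dialogue cue sits in the usual bottom-center position, while the non-speech cue is placed toward the top-right, hinting at where the sound comes from. Player support for `STYLE` blocks and cue positioning varies, so treat this as a sketch of what the format allows rather than a guaranteed rendering.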