The characters on our favorite television programs are just like us: they come home from work and stream their favorite TV shows and YouTube videos. But it’s hard for me to recall any programs that showed characters using captioned media. While the sounds emanating from their screens may be captioned for us, those sounds are not captioned for them.
Even as accessibility advocates make the case that captioning is a centerpiece of universal design, captioning doesn’t have a very high public profile. Yes, people tweet and write about the value of captioning every day, but we don’t have many opportunities to engage with captioned media in the public sphere. Only a handful of cities in the U.S. require captioning on all public screens (bars, waiting rooms, etc.). Outside of these cities — Portland (Oregon), Ann Arbor, Rochester, Albuquerque, Seattle, San Francisco — we are likely to encounter public captioning only in airports and select bars and restaurants. At the movie theater, good luck finding an open captioned showing in your city.
When captions are enabled on the screens inside the fictional programs we watch, they don’t necessarily make these programs more accessible to us. But they do make captioning itself more visible. Captioning is normalized when it becomes something to be expected no matter where we encounter it. Producers also model more inclusive worlds (and help to counter the public’s resistance to open captioning) when they depict characters on TV who consume captioned programming.
I’ve written about this issue before. In a 2018 article on “Designing Captions,” I used the term “meta captioning” to refer to the “process of hard coding captions onto the screens displayed inside the screens we are watching.” Meta captioning puts captions on every screen, even the screens that characters aren’t paying attention to and viewers don’t have enough time to read. In other words, meta captions contribute to a scene’s inclusive ambience even though they are not always intended to be read like traditional speech captions.
I’m always a bit disappointed when I see people on TV interacting with uncaptioned media, even though our fictional worlds are presumably populated, with few exceptions, entirely by hearing people who don’t need captions. My first inclination is to re-caption these scenes, either by using a more robust captioning format or by burning meta captions onto the digital screens in these fictional worlds.
Here’s an example from Horse Girl (Netflix 2020) starring Alison Brie as a “sweet misfit with a fondness for crafts, horses and supernatural crime shows [who] finds her increasingly lucid dreams trickling into her waking life.” It’s the supernatural crime show part that’s relevant here. In this scene, Brie is watching her favorite fictional program, Purgatory. We’re watching along with her, which means that the speech sounds on her television screen are intended to be read and understood by us (as opposed to appearing momentarily in the background of a scene as visual decoration). In the original version, Brie is not watching Purgatory with captions:
Let me offer a re-captioned intervention that uses a more versatile captioning format (WebVTT). In the video clip below, I experiment with screen positioning, font family, color, and type size. For color, I’m inspired by both the BBC’s subtitling guidelines for color (white, yellow, cyan, green) and Hulu’s default caption color, a warm yellow (approximately #FFCC00: red 255, green 204, blue 0). I haven’t tested these captions on multiple browsers or devices, and, honestly, I’m still learning and experimenting with WebVTT’s functionality for styling and positioning captions. If possible, please view the clip in Google Chrome, which supports WebVTT’s color and font styling, unlike Firefox.
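For readers who want to try this themselves: a WebVTT file attaches to a web video through a `<track>` element inside `<video>`. The filenames below are hypothetical placeholders, not the actual files behind the clip:

```html
<!-- Filenames are placeholders for illustration only. -->
<video controls width="640" src="horse-girl-clip.mp4">
  <!-- kind="captions" tells the browser this track includes
       speech and non-speech sound information -->
  <track kind="captions" src="horse-girl-recaptioned.vtt"
         srclang="en" label="English (re-captioned)" default>
</video>
```

The `default` attribute enables the track automatically, though viewers can still toggle it off in the player controls.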
Feel free to browse the WebVTT caption file I created for this clip. WebVTT supports simple style blocks written in CSS and scoped to cues with the ::cue selector. I created a different class for each caption style I needed for this clip. Classes included “TVCaptions-offscreen,” “TVCaptions-onscreen,” and “TVCaptions-onscreen-closeup.” Each class was defined in terms of text color, font size, and font family. Screen position for each caption is specified in the cue settings alongside each timestamp.
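To make the structure concrete, here is a minimal sketch of a WebVTT file built this way. The class names come from this post; the timings, cue text, and font choices are placeholders, not the actual cues from the Horse Girl clip:

```
WEBVTT

STYLE
::cue(.TVCaptions-onscreen) {
  color: #FFCC00;              /* warm yellow, per Hulu's default */
  font-family: Arial, sans-serif;
  font-size: 90%;
}
::cue(.TVCaptions-offscreen) {
  color: #FFFFFF;
  font-style: italic;
}

1
00:00:02.000 --> 00:00:05.500 line:15% position:68% align:center size:30%
<c.TVCaptions-onscreen>(placeholder: dialogue on the TV-within-the-TV)</c>

2
00:00:06.000 --> 00:00:08.000 line:85% align:center
<c.TVCaptions-offscreen>(placeholder: speech from off screen)</c>
```

The cue settings (line, position, align, size) handle placement per caption, while the STYLE block handles typography per class, so each cue only needs a `<c.classname>` wrapper to pick up its look.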
I hope this experiment starts a conversation about the limits of single-style, bottom-center captioning. When we caption every screen — even (and especially) the screens encountered by the fictional character on our favorite programs — we help to normalize and center captioning as a regular design feature of the public sphere.