About Sean Zdenek

Dr. Sean Zdenek is an associate professor of technical and professional writing at the University of Delaware. His research and teaching interests include technical writing, disability studies, sound studies, and rhetorical theory. His book, Reading Sounds: Closed-Captioned Media and Popular Culture (UChicago, 2015), won the 2017 award for best book in technical or scientific communication from the Conference on College Composition and Communication.

Published: Special issue on disability and technical communication

I guest edited a special issue of Communication Design Quarterly on “Reimagining Disability and Accessibility in Technical and Professional Communication” (volume 6, issue 4, December 2018). The issue includes an introduction and three articles on a range of cutting-edge topics, including lip reading and interface design, subtitling and video accessibility across multiple languages, and cultivating virtuous course designers.

Browse the special issue: CDQ 6.4 (PDF).

A note on PDF accessibility: The issue’s contributors carefully prepared their Word documents to be accessible when converted to PDF, including alt text for figures and semantic tagging for headings. These features were lost when the Word files were formatted to the journal’s specifications. As a workaround, I integrated the authors’ alt text into their figure captions. After the issue was published, I spent some time with the published version to 1) run Adobe’s accessibility wizard, 2) manually tag all headings and tables, 3) fix the reading order in places, and 4) test with a screen reader. I believe this version is much more screen-reader friendly, but please let me know if any of the content in this PDF is inaccessible (zdenek@udel.edu).

A frame from Star Wars: The Force Awakens featuring BB-8 and the caption: (Chirps Inquisitively)

Chirp! Captioning BB-8 in The Force Awakens

The release of Star Wars: Episode VII – The Force Awakens on DVD and Blu-ray last week gives us a welcome opportunity to take a much closer look at the closed captions.

The BB-8 droid provides an instructive case study. How do the closed captions convey the changing meanings and emotions of the droid’s electronic beeping sounds?

Read the full post on ReadingSounds.net.

A frame from Avatar (2009) featuring a close up of Neytiri

Tracking sonic timelines in closed captioning

Every sustained sound in the closed caption track creates a sonic timeline that persists until it is terminated by a change in visual context or a stop caption. Multiple timelines may co-exist, with sustained sounds/captions building on each other. Sound is simultaneous, and one way of creating simultaneity on the caption track is by layering sustained sounds.

Read the full post on ReadingSounds.net

“The main factor that drives captioning quality is what clients are willing to pay for it.”

Recently, I received a thoughtful email from a professional closed captioner with over a decade of experience. Her message raises some important questions about the economics of closed captioning. She’s given me permission to post her message here, provided her contact info is removed, in the hopes that viewers will take a more active role in telling broadcasters and companies what kinds of captions they want.

Continue reading ““The main factor that drives captioning quality is what clients are willing to pay for it.””

A screenshot from Eagle Eye (2008)

Captioning 101: When music lyrics trigger an explosion, you just might want to caption them.

When music lyrics are instrumental to a film’s plot, they need to be captioned. It’s as simple as that: if captioners are responsible for captioning all significant sounds, then any sound that drives the plot must be captioned.

Continue reading “Captioning 101: When music lyrics trigger an explosion, you just might want to caption them.”

A large, steel, industrial gear

Stylistic standards for closed captioning and data mining

When speaker IDs, musical lyrics, and sound descriptions have their own distinctive stylistic treatments, they can be extracted from closed caption files and studied as separate units of discourse. The only efficient and practical way to study hundreds or thousands of sound descriptions at one time is to use a program to separate speech from non-speech.

Continue reading “Stylistic standards for closed captioning and data mining”