Selfie of Sean Zdenek taken on the balcony of a cruise ship with the Atlantic Ocean in the background

About Me

I am an associate professor of technical and professional writing at the University of Delaware. My research and teaching interests include technical writing, disability studies, sound studies, and rhetorical theory. My book, Reading Sounds: Closed-Captioned Media and Popular Culture (UChicago, 2015), won the 2017 award for best book in technical or scientific communication from the Conference on College Composition and Communication.

Published: Special issue on disability and technical communication

I guest edited a special issue of Communication Design Quarterly on “Reimagining Disability and Accessibility in Technical and Professional Communication” (volume 6, issue 4, December 2018). The issue includes an introduction and three articles on a range of cutting-edge topics, including lip reading and interface design, subtitling and video accessibility across multiple languages, and cultivating virtuous course designers.

Browse the special issue: CDQ 6.4 (pdf).

Continue reading “Published: Special issue on disability and technical communication”
A frame from Star Wars: The Force Awakens featuring BB-8 and the caption: (Chirps Inquisitively)

Chirp! Captioning BB-8 in The Force Awakens

The release of Star Wars: Episode VII – The Force Awakens on DVD and Blu-ray last week gives us a welcome opportunity to take a much closer look at the closed captions.

The BB-8 droid provides an instructive case study. How do the closed captions convey the changing meanings and emotions of the droid’s electronic beeping sounds?

Read the full post.

A frame from Avatar (2009) featuring a close up of Neytiri

Tracking sonic timelines in closed captioning

Every sustained sound in the closed caption track creates a sonic timeline that persists until it is terminated through a change in visual context or a stop caption. Multiple timelines may co-exist, with sustained sounds/captions building on each other. Sound is simultaneous, and one way of creating simultaneity on the caption track is by layering up sustained sounds.
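As a rough illustration of the timeline idea, each sustained sound can be modeled as an interval that opens when its caption appears and stays open until a stop caption closes it; the sounds layered at any given moment are simply the intervals still open. This is a minimal Python sketch, not a real captioning tool; the class names, method names, and example sounds are my own illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Timeline:
    """One sustained sound's span on the caption track."""
    sound: str
    start: float                  # seconds into the film
    end: Optional[float] = None   # None while the sound is still sustained

class SonicTrack:
    """Track co-existing sustained-sound timelines on a caption track."""

    def __init__(self):
        self.timelines = []

    def open(self, sound, at):
        """A sustained-sound caption (e.g. '[alarm blaring]') opens a timeline."""
        self.timelines.append(Timeline(sound, at))

    def close(self, sound, at):
        """A stop caption (e.g. '[alarm stops]') or scene change closes it."""
        for t in self.timelines:
            if t.sound == sound and t.end is None:
                t.end = at

    def active(self, at):
        """All timelines layered at a given moment: simultaneity in captions."""
        return [t.sound for t in self.timelines
                if t.start <= at and (t.end is None or at < t.end)]
```

For example, if an alarm caption appears at 10 seconds and rain at 12 seconds, and the alarm is stopped at 20 seconds, then at 15 seconds both timelines are layered, while at 25 seconds only the rain persists.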

Read the full post.

“The main factor that drives captioning quality is what clients are willing to pay for it.”

Recently, I received a thoughtful email from a professional closed captioner with over a decade of experience. Her message raises some important questions about the economics of closed captioning. She’s given me permission to post her message here, provided that her contact info is removed. She hopes that viewers will take a more active role in telling broadcasters and companies what kinds of captions they want.

Continue reading ““The main factor that drives captioning quality is what clients are willing to pay for it.””

A screenshot from Eagle Eye (2008)

Captioning 101: When music lyrics trigger an explosion, you just might want to caption them.

When music lyrics are instrumental to a film’s plot, they need to be captioned. It’s as simple as that: if captioners are responsible for captioning all significant sounds, then lyrics that drive the plot clearly qualify.

Continue reading “Captioning 101: When music lyrics trigger an explosion, you just might want to caption them.”

A large, steel, industrial gear

Stylistic standards for closed captioning and data mining

When speaker IDs, musical lyrics, and sound descriptions have their own distinctive stylistic treatments, they can be extracted from closed caption files and studied as separate units of discourse. The only efficient and practical way to study hundreds or thousands of sound descriptions at one time is to use a program to separate speech from non-speech.
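A minimal sketch of what such a separation program might look like, assuming one common set of stylistic treatments: brackets for sound descriptions, musical notes for lyrics, and an uppercase name plus colon for speaker IDs. These conventions vary across caption files, and every name in the code is a hypothetical illustration, not a standard.

```python
import re

# Assumed stylistic conventions, one caption line per entry:
#   sound descriptions -> wrapped in brackets, e.g. "[door slams]"
#   lyrics             -> wrapped in musical notes, e.g. "♪ la la ♪"
#   speaker IDs        -> uppercase name and colon, e.g. "HAN: ..."
DESCRIPTION = re.compile(r"^\[\s*(.*?)\s*\]$")
LYRIC = re.compile(r"^♪\s*(.*?)\s*♪?$")
SPEAKER_ID = re.compile(r"^([A-Z][A-Z .'-]+):\s*(.*)$")

def classify(line):
    """Sort one caption line into a discourse category."""
    line = line.strip()
    m = DESCRIPTION.match(line)
    if m:
        return ("description", m.group(1))
    m = LYRIC.match(line)
    if m:
        return ("lyric", m.group(1))
    m = SPEAKER_ID.match(line)
    if m:
        return ("speech", f"{m.group(1)}: {m.group(2)}")
    return ("speech", line)      # untagged lines default to speech

def separate(captions):
    """Group caption lines by category for corpus-level study."""
    groups = {"speech": [], "lyric": [], "description": []}
    for line in captions:
        category, text = classify(line)
        groups[category].append(text)
    return groups
```

With consistent styling, a script like this can pull thousands of sound descriptions out of a caption corpus at once; without consistent styling, the categories blur and no automated separation is reliable, which is the argument for stylistic standards.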

Continue reading “Stylistic standards for closed captioning and data mining”