<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://seanzdenek.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://seanzdenek.com/" rel="alternate" type="text/html" /><updated>2026-04-19T00:14:09+00:00</updated><id>https://seanzdenek.com/feed.xml</id><title type="html">Sean Zdenek</title><subtitle>Access, sound, captioning, film/media.</subtitle><author><name>Sean Zdenek</name></author><entry><title type="html">Caption every screen.</title><link href="https://seanzdenek.com/2020/03/22/caption-every-screen/" rel="alternate" type="text/html" title="Caption every screen." /><published>2020-03-22T00:00:00+00:00</published><updated>2020-03-22T00:00:00+00:00</updated><id>https://seanzdenek.com/2020/03/22/caption-every-screen</id><content type="html" xml:base="https://seanzdenek.com/2020/03/22/caption-every-screen/"><![CDATA[<p>The characters on our favorite television programs are just like us: they come home from work and stream their favorite TV shows and YouTube videos. But it&#8217;s hard for me to recall any programs that showed actors using captioned media. While the sounds emanating from their screens may be captioned for us, the sounds are not captioned for them.</p>

<p>Even as accessibility advocates make the case that captioning is a centerpiece of universal design, <strong>captioning doesn&#8217;t have a very high public profile</strong>. Yes, people tweet and write about the value of captioning every day, but we don&#8217;t have many opportunities to engage with captioned media in the public sphere. Only a handful of cities in the U.S. require captioning on all public screens (bars, waiting rooms, etc.). Outside of these cities &#8212; <a href="https://www.portlandoregon.gov/69431">Portland (Oregon)</a>, <a href="https://www.a2gov.org/departments/city-clerk/Documents/16-24%20Closed%20Captioning%20Ordinance%20Approval%20Notice.pdf">Ann Arbor</a>, <a href="https://www.cityofrochester.gov/article.aspx?id=8589973222">Rochester</a>, <a href="https://cabq.legistar.com/LegislationDetail.aspx?ID=4214613&amp;GUID=97D83AD7-94B8-4083-94F6-4B9450B67F13">Albuquerque</a>, <a href="https://www.seattle.gov/civilrights/civil-rights/new-laws-and-amendments/closed-captioning">Seattle</a>, <a href="https://sfgov.org/sfmdc/resolution-2008-03-board-supervisorsclosed-captioning">San Francisco</a> &#8212; we are likely to encounter public captioning only in airports and select bars and restaurants. At the movie theater, good luck finding an <a href="https://www.regmovies.com/static/en/us/theatre/captioning-and-descriptive-video">open captioned showing</a> in your city.</p>



<p>When captions are enabled on the screens inside the fictional programs we watch, they don&#8217;t necessarily make these programs more accessible to us. But they do make captioning itself more visible. <strong>Captioning is normalized when it becomes something to be expected no matter where we encounter it</strong>. Producers also model more inclusive worlds (and help to counter the <a href="https://errorsofenchantment.com/closed-captioning-mandate-and-the-gop-a-sad-shade-of-the-same/">public&#8217;s resistance to open captioning</a>) when they depict characters on TV who consume captioned programming. </p>



<p>I&#8217;ve written about this issue before. In a 2018 article on &#8220;<a href="http://technorhetoric.net/23.1/topoi/zdenek/index.html">Designing Captions</a>,&#8221; I used the term &#8220;<a href="http://kairos.technorhetoric.net/23.1/topoi/zdenek/metacaptioning.html">meta captioning</a>&#8221; to refer to the &#8220;process of hard coding captions onto the screens displayed inside the screens we are watching.&#8221; Meta captioning puts captions on every screen, even the screens that characters aren&#8217;t paying attention to and viewers don&#8217;t have enough time to read. In other words, meta captions contribute to a scene&#8217;s inclusive ambience even though they are not always intended to be read like traditional speech captions. </p>



<p>I&#8217;m always a bit disappointed when I see people on TV interacting with uncaptioned media, despite the fact that our fictional worlds are assumed to be populated entirely &#8212; with few exceptions &#8212; by hearing people who don&#8217;t need captions (presumably). My first inclination is to re-caption these scenes by using a more robust captioning format or burning meta captions onto the digital screens in these fictional worlds. </p>



<p>Here&#8217;s an example from <em><a href="https://www.netflix.com/title/81060149">Horse Girl</a></em> (Netflix 2020) starring Alison Brie as a &#8220;sweet misfit with a fondness for crafts, horses and supernatural crime shows [who] finds her increasingly lucid dreams trickling into her waking life.&#8221; It&#8217;s the supernatural crime show part that&#8217;s relevant here. In this scene, Brie is watching her favorite fictional program, <em>Purgatory</em>. We&#8217;re watching along with her, which means that the speech sounds on her television screen are intended to be read and understood by us (as opposed to appearing momentarily in the background of a scene as visual decoration). In the <strong>original version</strong>, Brie is not watching <em>Purgatory</em> with captions:</p>



<video width="100%" height="100%" poster="/assets/images/blog/poster-Horsegirl-captioned.jpg" controls>
  <source src="/assets/images/blog/Horsegirl-TVcaptions-Captioned-Clip-720.mp4" type="video/mp4">
 Sorry, your browser doesn&#8217;t support embedded videos.
</video>
<p class="vidcaption">Source: <em>Horse Girl</em> (2020). Netflix. Original captions.</p>



<p>Let me offer a <strong>re-captioned intervention</strong> that uses a more versatile captioning format (<a href="https://www.w3.org/TR/webvtt1/">WebVTT</a>). In the video clip below, I experiment with screen positioning, font family, color, and type size. For color, I&#8217;m inspired by both the <a href="https://bbc.github.io/subtitle-guidelines/#Colours">BBC&#8217;s subtitling guidelines</a> for color (white, yellow, cyan, green) and <a href="https://readingsounds.net/when-a-yellow-subtitle-meets-a-character-from-the-simpsons/">Hulu&#8217;s default caption color</a>, which is a warm yellow (approx. red 255, green 204, blue 0). I haven&#8217;t tested these captions on multiple browsers or devices, and, honestly, I&#8217;m still learning and experimenting with WebVTT&#8217;s functionality for styling and positioning captions. If possible, <strong>please view the clip in Google Chrome</strong>, which supports WebVTT&#8217;s color and font styling, unlike Firefox. </p>



<style>
  video::cue(.Captions) {
    color: white;
  }
  video::cue(.TVCaptions-offscreen) {
    color: white;
    font-size: 1.2em;
    font-family: sans-serif;
  }
  video::cue(.TVCaptions-onscreen) {
    color: #ffcc00;
    font-size: 0.62em;
    font-family: monospace;
  }
  video::cue(.TVCaptions-onscreen-closeup) {
    color: #ffcc00;
    font-size: 1em;
    font-family: monospace;
  }
</style>
<video width="100%" height="100%" poster="/assets/images/blog/poster-Horsegirl-WebVTT-fullCaptions.jpg" controls>
  <source src="/assets/images/blog/Horsegirl-TVcaptions-uncaptioned-Clip-720.mp4" type="video/mp4">
<track label="English" kind="captions" srclang="en" src="https://seanzdenek.com/assets/images/blog/Horsegirl-WebVTT-full.vtt" default>
 Sorry, your browser doesn&#8217;t support embedded videos.
</video>
<p class="vidcaption">Source: <em>Horse Girl</em> (2020). Netflix. Re-captioned by the author using the WebVTT format.</p>



<p>Feel free to browse the <a href="/assets/images/blog/Horsegirl-WebVTT-full.vtt.html">WebVTT caption file</a> I created for this clip.  WebVTT supports simple style markup (identical to CSS styling). I created a different class for each caption style I needed for this clip. Classes included &#8220;TVCaptions-offscreen,&#8221; &#8220;TVCaptions-onscreen,&#8221; and &#8220;TVCaptions-onscreen-closeup.&#8221; Each class was defined in terms of text color, font size, and font family. Screen position for each caption is included with each timestamp.  </p>
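<p>To make the mechanics concrete, here is a simplified sketch of what such a WebVTT file can look like. The timestamps, cue text, and exact setting values below are invented for illustration (they are not copied from my caption file): a <code>STYLE</code> block defines the per-class styling, class spans (<code>&lt;c.classname&gt;</code>) apply a class to a cue&#8217;s text, and cue settings such as <code>line</code> and <code>position</code> control placement.</p>

```vtt
WEBVTT

STYLE
::cue(.TVCaptions-onscreen) {
  color: #ffcc00;
  font-size: 0.62em;
  font-family: monospace;
}

00:00:01.000 --> 00:00:04.000 line:30% position:50% align:center
<c.TVCaptions-onscreen>DIALOGUE SHOWN ON THE TV INSIDE THE SCENE</c>

00:00:04.500 --> 00:00:07.000
<c.TVCaptions-offscreen>Speech from the main scene</c>
```

<p>The styling can live either in the VTT file itself (as above) or in the host page&#8217;s CSS via <code>video::cue()</code>, which is the approach used on this page.</p>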



<p>I hope this experiment starts a conversation about the limits of single-style, bottom-center captioning. When we caption every screen &#8212; even (and especially) the screens encountered by the fictional characters on our favorite programs &#8212; we help to normalize and center captioning as a regular design feature of the public sphere.</p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Visual Design" /><category term="Alison Brie" /><category term="Color" /><category term="Experimental" /><category term="Guidelines" /><category term="Horse girl" /><category term="Netflix" /><category term="Placement" /><category term="Positioning" /><category term="Style" /><summary type="html"><![CDATA[The characters on our favorite television programs are just like us: they come home from work and stream their favorite TV shows and YouTube videos. But it&#8217;s hard for me to recall any programs that showed actors using captioned media. While the sounds emanating from their screens may be captioned for us, the sounds are not captioned for them.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2020/03/feature-image-Horsegirl-1080x608-1-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2020/03/feature-image-Horsegirl-1080x608-1-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Positioning and styling captions when speakers overlap and interrupt each other</title><link href="https://seanzdenek.com/2020/03/20/positioning-and-styling-captions-when-speakers-overlap-and-interrupt/" rel="alternate" type="text/html" title="Positioning and styling captions when speakers overlap and interrupt each other" 
/><published>2020-03-20T00:00:00+00:00</published><updated>2020-03-20T00:00:00+00:00</updated><id>https://seanzdenek.com/2020/03/20/positioning-and-styling-captions-when-speakers-overlap-and-interrupt</id><content type="html" xml:base="https://seanzdenek.com/2020/03/20/positioning-and-styling-captions-when-speakers-overlap-and-interrupt/"><![CDATA[<p>It can be challenging to caption scenes with multiple speakers. Bottom-center caption placement is far from ideal for readers when it fails to clarify which captions belong to which speaker. Adding to the difficulty: speakers often talk quickly, interrupt each other, and overlap their speech to show collaborative support. When captions are placed underneath or next to each speaker, readers can more quickly distinguish &#8212; at a glance &#8212; who is speaking.</p>

<p>Screen placement is a core standard of caption quality. The FCC&#8217;s 2014 rules for &#8220;<a href="https://www.fcc.gov/fcc-adopts-closed-captioning-quality-standards-tv-programs">closed captioning quality standards for TV programs</a>&#8221; require that captions be Accurate, Synchronous, Complete, and Properly Placed. Regarding placement: &#8220;Captions should not block other important visual content on the screen, overlap one another, or run off the edge of the video screen&#8221; (<a href="https://docs.fcc.gov/public/attachments/DOC-325695A1.pdf">FCC</a>).</p>



<p><a href="https://www.amazon.com/Contagion-Marion-Cotillard/dp/B006IVBSBU"><em>Contagion</em></a> (2011), which we rented from Amazon Prime Video recently, provides quite a few examples of captions covering on-screen text. Because the on-screen text is low on the screen, and the captions are set exclusively in the bottom-center (default location), the captions partially cover this text at times. Whether the captions cover words on the screen depends on the device being used to view the movie and the caption size set by the user. I prefer large captions when watching programs on a large-screen television. Large captions are more likely to cover any low-set text.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="570" src="/assets/images/2020/03/Contagion-Coughs-CoversOnScreenText-1-1024x570.jpg" alt="A profile shot of Gweneth Paltrow in Contagion (2011). The on-screen text, Day 2, is partially covered by the closed captions: [Coughs] [Cell phone rings]" class="wp-image-6815" srcset="/assets/images/2020/03/Contagion-Coughs-CoversOnScreenText-1-1024x570.jpg 1024w, /assets/images/2020/03/Contagion-Coughs-CoversOnScreenText-1-300x167.jpg 300w, /assets/images/2020/03/Contagion-Coughs-CoversOnScreenText-1-768x427.jpg 768w, /assets/images/2020/03/Contagion-Coughs-CoversOnScreenText-1-1536x854.jpg 1536w, /assets/images/2020/03/Contagion-Coughs-CoversOnScreenText-1.jpg 1674w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>A caption partially covers &#8220;Day 2&#8221; in this frame from <em>Contagion</em> (2011). Source: Amazon Prime Video. Warner Bros.</figcaption></figure>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="557" src="/assets/images/2020/03/Contagion2a-CoversOnScreenText-1024x557.jpg" alt="A frame from Contagion (2011) showing a TV monitor mounted to a wall and the on-screen text, Day 8, which is partially covered by the closed caption: Chicago, Los Angeles, Boston, and Salt Lake." class="wp-image-6816" srcset="/assets/images/2020/03/Contagion2a-CoversOnScreenText-1024x557.jpg 1024w, /assets/images/2020/03/Contagion2a-CoversOnScreenText-300x163.jpg 300w, /assets/images/2020/03/Contagion2a-CoversOnScreenText-768x417.jpg 768w, /assets/images/2020/03/Contagion2a-CoversOnScreenText-1536x835.jpg 1536w, /assets/images/2020/03/Contagion2a-CoversOnScreenText.jpg 1674w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>Another caption partially covers on-screen text in <em>Contagion</em> (2011). Source: Amazon Prime Video. Warner Bros.</figcaption></figure>



<p>But the topic of placement goes beyond making sure that titles, names, chyrons, and other on-screen text are not obscured by the captions. Caption placement can help readers identify who is speaking when multiple speakers are talking, interrupting, or overlapping their speech turns:</p>



<blockquote>When people onscreen speak simultaneously, place the captions underneath the speakers. If this is not possible due to the length of the caption or interference with onscreen graphics, caption each speaker at different timecodes. Do not use other speaker identification techniques, such as hyphens. (<a href="https://www.captioningkey.org/text.html#5">The Captioning Key</a>)</blockquote>



<p>Bottom-center captions can interfere with readers&#8217; attempts to associate lines of captioned dialogue with their respective speakers. In this scene from <em>Contagion</em> (original captions), Jude Law argues with a newspaper editor about the need to cover a developing story:</p>



<video width="100%" height="100%" poster="/assets/images/blog/poster-Contagion-captioned.jpg" controls>
  <source src="/assets/images/blog/Contagion-JournalistsTalkingOver-Captioned-CLIP-720.mp4" type="video/mp4">
 Sorry, your browser doesn&#8217;t support embedded videos.
</video>
<p class="vidcaption">Source: <em>Contagion</em> (2011). Amazon Prime Video. Warner Bros. Original captions.</p>



<p>This interaction is not too difficult to follow but could be improved by leveraging the power of placement. Single captions that blend the speech of multiple speakers can be confusing, especially in the absence of any distinguishing information such as preceding hyphens. For example, the following bottom-center captions combine speech from two different speakers, yet there are no visual cues in the captions to indicate which line(s) belong to which speaker:</p>



<blockquote>ALL OVER THE PLANET.<span style="color:red;font-weight:normal"> &lt;-- Speaker 1</span><br>
WE DON&#8217;T WANT TO BE THE PAPER<span style="color:blue;font-weight:normal"> &lt;-- Speaker 2</span><br>
THAT CRIES WOLF.</blockquote>

<blockquote>I TAPED THIS MEETING.<span style="color:red;font-weight:normal"> &lt;-- Speaker 1</span><br>
WE NEED MORE INFORMATION<span style="color:blue;font-weight:normal"> &lt;-- Speaker 2</span><br>
THAN THAT.</blockquote>



<p>Let&#8217;s re-caption this scene using a caption format such as <a href="https://www.w3.org/TR/webvtt1/">WebVTT</a> that supports more precise screen positioning options (view the following clips with the Chrome browser). </p>



<style>
  video::cue(.white) { 
    color: white; 
  }
</style>
<video width="100%" height="100%" controls poster="/assets/images/blog/poster-Contagion-WebVTT.jpg">
  <source src="/assets/images/blog/Contagion-JournalistsTalkingOver-Uncaptioned-CLIP-720.mp4" type="video/mp4">
<track label="English" kind="captions" srclang="en" src="https://seanzdenek.com/assets/images/blog/Contagion.vtt" default>
 Sorry, your browser doesn&#8217;t support embedded videos.
</video>
<p class="vidcaption">Source: <em>Contagion</em> (2011). Amazon Prime Video. Warner Bros. Re-captioned by the author using the WebVTT format.</p>
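<p>For the curious, the positioned cues look roughly like this in the VTT file. This is a sketch with invented timestamps and setting values (the dialogue comes from the captions quoted above): each cue carries its own <code>position</code>, <code>align</code>, and <code>size</code> settings so that one speaker&#8217;s lines sit on the left and the other&#8217;s on the right, and the cue times are allowed to overlap.</p>

```vtt
WEBVTT

00:00:01.000 --> 00:00:03.000 position:25% align:left size:45%
I TAPED THIS MEETING.

00:00:02.200 --> 00:00:04.500 position:75% align:right size:45%
WE NEED MORE INFORMATION
THAN THAT.
```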



<p>We could go further and style each speaker&#8217;s captions in a different color or visual style. Distinguishing speakers by color is common in the UK &#8212; see the <a href="https://bbc.github.io/subtitle-guidelines/#Colours">BBC&#8217;s Subtitle Guidelines</a>, which list a &#8220;limited range of colours [that] can be used to distinguish speakers from each other.&#8221; The limited color palette includes (in order of priority): white, yellow, cyan, and green. These colors must appear on a black background.</p>



<style>
  video::cue(.Alan) {
    color: white;
  }
  video::cue(.Lorraine) {
    color: #ffcc00;
  }
</style>
<video width="100%" height="100%" controls poster="/assets/images/blog/poster-Contagion-WebVTT-color.jpg">
  <source src="/assets/images/blog/Contagion-JournalistsTalkingOver-Uncaptioned-CLIP-720.mp4" type="video/mp4">
<track label="English" kind="captions" srclang="en" src="https://seanzdenek.com//assets/images/blog/Contagion-Color.vtt" default>
 Sorry, your browser doesn&#8217;t support embedded videos.
</video>
<p class="vidcaption">Source: <em>Contagion</em> (2011). Amazon Prime Video. Warner Bros. Re-captioned by the author using the WebVTT format.</p>
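<p>Inside the color-coded VTT file, each cue&#8217;s text is wrapped in a class span that the <code>video::cue(.Alan)</code> and <code>video::cue(.Lorraine)</code> rules on this page can target. Again, a sketch: the timestamps and placement values below are invented, and I&#8217;m assuming the pairing of speakers to colors for illustration.</p>

```vtt
WEBVTT

00:00:01.000 --> 00:00:03.000 position:30% align:left size:45%
<c.Alan>I TAPED THIS MEETING.</c>

00:00:02.200 --> 00:00:05.000 position:70% align:right size:45%
<c.Lorraine>WE DON'T WANT TO BE THE PAPER
THAT CRIES WOLF.</c>
```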



<p>Placement is meaningful.  <strong>When captions are placed on the screen strategically, they convey information through their form</strong>. Well-placed captions can help readers identify and distinguish speakers at a glance. What placement and color provide to readers is a more efficient method of speaker identification. Placement can&#8217;t and shouldn&#8217;t replace traditional <a href="https://www.captioningkey.org/speaker_identification.html">speaker identifiers</a>, of course. But placement can supplement other techniques without adding any additional words (proper names) or punctuation (hyphens) to an already jam-packed caption file. </p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Visual Design" /><category term="Contagion" /><category term="Gwyneth Paltrow" /><category term="Jude Law" /><category term="Pandemic" /><category term="Placement" /><summary type="html"><![CDATA[It can be challenging to caption scenes with multiple speakers. Bottom-center caption placement is far from ideal for readers when it fails to clarify which captions belong to which speaker. Adding to the difficulty: speakers often talk quickly, interrupt each other, and overlap their speech to show collaborative support. 
When captions are placed underneath or next to each speaker, readers can more quickly distinguish &#8212; at a glance &#8212; who is speaking.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2020/03/Contagion-Header-1080x601-1-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2020/03/Contagion-Header-1080x601-1-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">My interview with KairosCast</title><link href="https://seanzdenek.com/2019/02/06/published-podcast-interview/" rel="alternate" type="text/html" title="My interview with KairosCast" /><published>2019-02-06T00:00:00+00:00</published><updated>2019-02-06T00:00:00+00:00</updated><id>https://seanzdenek.com/2019/02/06/published-podcast-interview</id><content type="html" xml:base="https://seanzdenek.com/2019/02/06/published-podcast-interview/"><![CDATA[<p>I enjoyed talking with Courtney Danforth last summer about my captioning research. </p>



<p><strong>Check out the <a href="http://kairos.technorhetoric.net/23.2/interviews/kcast/index.html">podcast</a> with transcript.</strong></p>

<p>Danforth, C., Ferris, H. &amp; Bahl, E. 2019. KairosCast Interviews Sean Zdenek. <em>Kairos: A Journal of Rhetoric, Technology, and Pedagogy</em>, 23(2). <a href="http://kairos.technorhetoric.net/23.2/interviews/kcast/index.html">http://kairos.technorhetoric.net/23.2/interviews/kcast/index.html</a></p>



<p><strong>Image credit</strong>: &#8220;<a rel="noreferrer noopener" href="https://www.flickr.com/photos/40775084@N05/11127169383" target="_blank">Support Design</a>&#8221; by <a rel="noreferrer noopener" href="https://www.flickr.com/photos/40775084@N05" target="_blank">Kool Cats Photography over 14 Million Views</a> is licensed under <a rel="noreferrer noopener" href="https://creativecommons.org/licenses/by-nc/2.0/?ref=openverse" target="_blank">CC BY-NC 2.0</a>.</p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Podcasting" /><category term="Publishing" /><category term="Captioning" /><category term="Design" /><category term="Kairos" /><category term="Podcast" /><category term="research" /><summary type="html"><![CDATA[I enjoyed talking with Courtney Danforth last summer about my captioning research.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2019/02/11127169383_64e052fa2f_o-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2019/02/11127169383_64e052fa2f_o-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Published: Special issue on disability and technical communication</title><link href="https://seanzdenek.com/2019/02/06/published-special-issue-on-disability-and-technical-communication/" rel="alternate" type="text/html" title="Published: Special issue on disability and technical communication" /><published>2019-02-06T00:00:00+00:00</published><updated>2019-02-06T00:00:00+00:00</updated><id>https://seanzdenek.com/2019/02/06/published-special-issue-on-disability-and-technical-communication</id><content type="html" xml:base="https://seanzdenek.com/2019/02/06/published-special-issue-on-disability-and-technical-communication/"><![CDATA[<p>I guest edited a special issue of <em>Communication Design Quarterly</em> on &#8220;Reimagining Disability and Accessibility in 
Technical and Professional Communication&#8221; (volume 6, issue 4, December 2018). </p>

<p>The issue includes an introduction and three articles on a range of cutting-edge topics, including lip reading and interface design, subtitling and video accessibility across multiple languages, and cultivating virtuous course designers.</p>



<p><strong>Browse the special issue: <a href="https://readingsounds.net//assets/images/CDQ-6-4-special-issue/CDQ6-4Dec2018-Accessible.pdf">CDQ 6.4 (pdf)</a></strong>. </p>



<p><strong>A note on pdf accessibility</strong>: The issue’s contributors carefully prepared their Word documents to be accessible when converted to PDFs by including alt text for figures and semantic tagging for headings. Access to these features was lost when the Word files were formatted to the journal’s specifications. As a workaround, I integrated authors’ alt text into their figure captions. After the issue was published, I took some time with the published version of the issue to 1) run Adobe&#8217;s accessibility wizard, 2) manually tag all headings and tables, 3) fix reading order in places, and 4) test with a screen reader. I believe this version is much more screen reader friendly, but please let me know if any of the content in this pdf is inaccessible (<a href="mailto:zdenek@udel.edu">zdenek@udel.edu</a>).</p>



<p><strong>Image credit</strong>: &#8220;<a rel="noreferrer noopener" href="https://www.flickr.com/photos/53986933@N00/8516781169" target="_blank">Architecture on LSD</a>&#8221; by&nbsp;<a rel="noreferrer noopener" href="https://www.flickr.com/photos/53986933@N00" target="_blank">snowpeak</a>&nbsp;is licensed under&nbsp;<a rel="noreferrer noopener" href="https://creativecommons.org/licenses/by/2.0/?ref=openverse" target="_blank">CC BY 2.0</a>.</p>]]></content><author><name>Sean Zdenek</name></author><category term="Accessibility" /><category term="Publishing" /><category term="Captioning" /><category term="Visual Design" /><category term="accessibility" /><category term="disability" /><category term="special issue" /><category term="technical and professional communication" /><summary type="html"><![CDATA[I guest edited a special issue of Communication Design Quarterly on &#8220;Reimagining Disability and Accessibility in Technical and Professional Communication&#8221; (volume 6, issue 4, December 2018).]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2019/02/8516781169_793b434daa_o-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2019/02/8516781169_793b434daa_o-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Published: Designing Captions — A new article on enhanced captioning</title><link href="https://seanzdenek.com/2018/08/17/designing-captions-a-new-article-on-enhanced-captioning/" rel="alternate" type="text/html" title="Published: Designing Captions — A new article on enhanced captioning" /><published>2018-08-17T00:00:00+00:00</published><updated>2018-08-17T00:00:00+00:00</updated><id>https://seanzdenek.com/2018/08/17/designing-captions-a-new-article-on-enhanced-captioning</id><content type="html" 
xml:base="https://seanzdenek.com/2018/08/17/designing-captions-a-new-article-on-enhanced-captioning/"><![CDATA[<p>Check out my new article on enhanced captioning, just published in <em>Kairos: A Journal of Rhetoric, Technology, and Pedagogy</em> (23.1, 2018).</p>
<p><strong>Read the full article: &#8220;<a href="http://technorhetoric.net/23.1/topoi/zdenek/index.html">Designing captions: Disruptive experiments with typography, color, icons, and effects</a>.&#8221;</strong></p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Publishing" /><category term="Visual Design" /><category term="Accessibility" /><summary type="html"><![CDATA[Check out my new article on enhanced captioning, just published in Kairos: A Journal of Rhetoric, Technology, and Pedaogogy (23.1, 2018). Read the full article: &#8220;Designing captions: Disruptive experiments with typography, color, icons, and effects.&#8221;]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2018/08/bakedin-rickandmorty-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2018/08/bakedin-rickandmorty-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Cripping closed captioning: Experiments with type, icons, and dynamic effects</title><link href="https://seanzdenek.com/2016/08/06/cripping-closed-captioning-experiments-with-type-icons-and-dynamic-effects/" rel="alternate" type="text/html" title="Cripping closed captioning: Experiments with type, icons, and dynamic effects" /><published>2016-08-06T00:00:00+00:00</published><updated>2016-08-06T00:00:00+00:00</updated><id>https://seanzdenek.com/2016/08/06/cripping-closed-captioning-experiments-with-type-icons-and-dynamic-effects</id><content type="html" xml:base="https://seanzdenek.com/2016/08/06/cripping-closed-captioning-experiments-with-type-icons-and-dynamic-effects/"><![CDATA[<p>Can we open closed captioning up to greater experimentation through the use of color, icons, typography, and basic animations to convey meaning?</p>
<p><strong>Read the full article at <a href="http://www.digitalrhetoriccollaborative.org/2016/07/26/cripping-closed-captioning-experiments-with-type-icons-and-dynamic-effects/">DigitalRhetoricCollaborative.org</a>.</strong></p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Publishing" /><category term="Visual Design" /><summary type="html"><![CDATA[Can we open closed captioning up to greater experimentation through the use of color, icons, typography, and basic animations to convey meaning? Read the full article at DigitalRhetoricCollaborative.org.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2016/08/icons-bladerunner-homepage-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2016/08/icons-bladerunner-homepage-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Chirp! Captioning BB-8 in</title><link href="https://seanzdenek.com/2016/04/14/chirp-captioning-bb-8-in-the-force-awakens/" rel="alternate" type="text/html" title="Chirp! Captioning BB-8 in" /><published>2016-04-14T00:00:00+00:00</published><updated>2016-04-14T00:00:00+00:00</updated><id>https://seanzdenek.com/2016/04/14/chirp-captioning-bb-8-in-the-force-awakens</id><content type="html" xml:base="https://seanzdenek.com/2016/04/14/chirp-captioning-bb-8-in-the-force-awakens/"><![CDATA[<p>The release of <em>Star Wars: Episode VII &#8211; The Force Awakens</em> on DVD and Blu-Ray last week gives us a welcome opportunity to take a much closer look at the closed captions.</p>
<p>The BB-8 droid provides an instructive case study. How do the closed captions convey the changing meanings and emotions of the droid&#8217;s electronic beeping sounds?</p>
<p><strong>Read the full post on <a href="http://readingsounds.net/captioning-bb-8-in-the-force-awakens/">ReadingSounds.net</a>.</strong></p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Data Mining" /><category term="Non-Speech" /><summary type="html"><![CDATA[The release of Star Wars: Episode VII &#8211; The Force Awakens on DVD and Blu-Ray last week gives us a welcome opportunity to take a much closer look at the closed captions. The BB-8 droid provides an instructive case study. How do the closed captions convey the changing meanings and emotions of the droid&#8217;s electronic beeping sounds? Read the full post on ReadingSounds.net.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2015/12/BB-8-HeaderImage-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2015/12/BB-8-HeaderImage-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Do sirens always wail?</title><link href="https://seanzdenek.com/2016/01/07/do-sirens-always-wail/" rel="alternate" type="text/html" title="Do sirens always wail?" /><published>2016-01-07T00:00:00+00:00</published><updated>2016-01-07T00:00:00+00:00</updated><id>https://seanzdenek.com/2016/01/07/do-sirens-always-wail</id><content type="html" xml:base="https://seanzdenek.com/2016/01/07/do-sirens-always-wail/"><![CDATA[<p>How often are sirens described as wailing in closed captioning? What else do sirens do in closed captioning other than wail? Does it matter? An analysis of nonspeech descriptions of siren sounds in a corpus of DVD caption files.</p>
<p><strong>Read the full post on <a href="http://readingsounds.net/do-sirens-always-wail/">ReadingSounds.net</a>.</strong></p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Data Mining" /><category term="Non-Speech" /><category term="alarm sounds" /><category term="blaring" /><category term="nonspeech closed captioning" /><category term="police siren" /><category term="sirens" /><category term="wailing" /><summary type="html"><![CDATA[How often are sirens described as wailing in closed captioning? What else do sirens do in closed captioning other than wail? Does it matter? An analysis of nonspeech descriptions of siren sounds in a corpus of DVD caption files. Read the full post on ReadingSounds.net.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2016/01/Prophecy3-SirensWailingInDistance-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2016/01/Prophecy3-SirensWailingInDistance-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">When a yellow subtitle meets a character from The Simpsons</title><link href="https://seanzdenek.com/2015/12/18/when-a-yellow-subtitle-meets-a-character-from-the-simpsons/" rel="alternate" type="text/html" title="When a yellow subtitle meets a character from The Simpsons" /><published>2015-12-18T00:00:00+00:00</published><updated>2015-12-18T00:00:00+00:00</updated><id>https://seanzdenek.com/2015/12/18/when-a-yellow-subtitle-meets-a-character-from-the-simpsons</id><content type="html" xml:base="https://seanzdenek.com/2015/12/18/when-a-yellow-subtitle-meets-a-character-from-the-simpsons/"><![CDATA[<p>A comparison of the default yellow closed captions on Hulu.com with the yellow skin color of the animated characters on <em>The Simpsons</em>.</p>
<p><strong>Read the full post on <a href="http://readingsounds.net/when-a-yellow-subtitle-meets-a-character-from-the-simpsons/">ReadingSounds.net</a>.</strong></p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Visual Design" /><category term="Stats" /><category term="Color" /><category term="Hulu" /><category term="Pantone 116 c" /><category term="Simpsons" /><category term="Yellow" /><summary type="html"><![CDATA[A comparison of the default yellow closed captions on Hulu.com with the yellow skin color of the animated characters on The Simpsons. Read the full post on ReadingSounds.net.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2015/12/HeaderImage-SimpsonsYellow-1440x600-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2015/12/HeaderImage-SimpsonsYellow-1440x600-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Tracking sonic timelines in closed captioning</title><link href="https://seanzdenek.com/2015/10/18/tracking-sonic-timelines-in-closed-captioning/" rel="alternate" type="text/html" title="Tracking sonic timelines in closed captioning" /><published>2015-10-18T00:00:00+00:00</published><updated>2015-10-18T00:00:00+00:00</updated><id>https://seanzdenek.com/2015/10/18/tracking-sonic-timelines-in-closed-captioning</id><content type="html" xml:base="https://seanzdenek.com/2015/10/18/tracking-sonic-timelines-in-closed-captioning/"><![CDATA[<p>Every sustained sound in the closed caption track creates a sonic timeline that persists until it is terminated through a change in visual context or a stop caption. Multiple timelines may co-exist, with sustained sounds/captions building on each other. Sound is simultaneous, and one way of creating simultaneity on the caption track is by layering up sustained sounds.</p>
<p><strong>Read the full post on <a href="http://readingsounds.net/tracking-sonic-timelines-in-closed-captioning/">ReadingSounds.net</a>.</strong></p>]]></content><author><name>Sean Zdenek</name></author><category term="Captioning" /><category term="Non-Speech" /><category term="Aliens vs. Predator" /><category term="Avatar" /><category term="discrete sounds" /><category term="Inception" /><category term="Man of Steel" /><category term="Skyfall" /><category term="sonic timelines" /><category term="sustained sounds" /><summary type="html"><![CDATA[Every sustained sound in the closed caption track creates a sonic timeline that persists until it is terminated through a change in visual context or a stop caption. Multiple timelines may co-exist, with sustained sounds/captions building on each other. Sound is simultaneous, and one way of creating simultaneity on the caption track is by layering up sustained sounds. Read the full post on ReadingSounds.net.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://seanzdenek.com/assets/images/2015/10/neytiri_avatar_1080p-HD-500x250.jpg" /><media:content medium="image" url="https://seanzdenek.com/assets/images/2015/10/neytiri_avatar_1080p-HD-500x250.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>