Ever Notice How Bad Closed Captioning Is? One Government Agency Is Doing Something About It

Feb 20, 2014

Noting that closed captions on television "are often riddled with typos or incomplete sentences that leave viewers struggling to make sense of what’s being said," the Los Angeles Times’ Show Tracker reports that a key government agency has set its sights on improving the situation.

The Federal Communications Commission is expected to adopt new rules aimed at improving the quality of closed captioning, the story reports.

The report cites as an example a recent weather forecast on WeatherNation that was transcribed as “five wins and a very light power reese know,” which was supposed to say, “high winds and a very light, powdery snow."

“The FCC will require that captions must match spoken words in dialogue and convey background noises and other sounds to the fullest extent possible, according to agency officials familiar with the order,” the story reports.

The report adds: “The order will also mandate that captions not block other content on the screen, overlap one another, run off the edge of the video screen or be blocked by other information.”

The story notes that the proposed changes "sat in limbo for a decade," until new FCC Chairman Tom Wheeler, who has had the job for just a few months, started pushing to fast-track the regulations.

[Photo: Tom Wheeler]


  1. This doesn’t require Congressional action. All it needs is for someone to start caring about the quality of captioning. The captioners, as best I can tell from correspondence with various providers, work under great time pressure and with minimal oversight (i.e., proofreaders). Worse are the content providers who create the captions in-house (using interns? often, clearly, young people); unfondly do I remember season 1 of “Mad Men.” More recently, BBC America’s “Fleming” captions have added a degree of high comedy to the story.

  2. Captioning is often done in realtime by people trained as court stenographers, especially for live and news programming, typing the captions on phonetic keyboards as they listen to the audio, which can sometimes be difficult to hear. As in most fields, I am sure there are GOOD stenographers and NOT SO GOOD stenographers, and the quality of the captioning is greatly affected by the skill of the stenographer involved.

  3. The effort required to create a closed caption file for insertion into the video file is very time-consuming.
    The commonly used SCC format is a formatted text data file that is not human-readable. Because of how the data is packed into video frames, the CC data that corresponds to a particular segment of video may not be stored in that segment's frames but could sit up to two seconds earlier or later.

    It is not so much that broadcasters don’t care; it is that doing it right is so time-consuming and costly that doing it at all seems a wonder.

    Typically someone is listening to the off-air feed via a telephone headset and typing on a device that inserts the live type directly into the video stream, post-studio but pre-transmitter. No proofreading is possible. Because a lot of cable stations run a seven-second delay to avoid wardrobe-malfunction suits, the closed-caption operator is already behind the video feed by the time they get the audio. That is why captions appear so late on screen.

    Mandating that captions be more accurate is a waste of time. Unless speech-to-text recognition can be implemented accurately, the quality is unlikely to improve.
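[Editor's note: the SCC format comment 3 describes can be illustrated with a short sketch. A Scenarist Closed Caption line pairs an SMPTE HH:MM:SS:FF timecode with hex-encoded CEA-608 byte pairs; printable characters carry an odd-parity bit in the most significant bit, and byte values 0x10–0x1F (after the parity bit is stripped) introduce two-byte control codes. The sample line below is a hypothetical example, not from any real broadcast, and the function names are my own; the character mapping also treats the bytes as plain ASCII, whereas CEA-608's basic set differs in a few code points.]

```python
# Hedged sketch: decode the text payload of one SCC caption line.

def timecode_to_seconds(tc: str, fps: float = 29.97) -> float:
    """Convert an SMPTE HH:MM:SS:FF (or HH:MM:SS;FF drop-frame) timecode to seconds."""
    hh, mm, ss, ff = (int(p) for p in tc.replace(";", ":").split(":"))
    return hh * 3600 + mm * 60 + ss + ff / fps

def decode_scc_text(hex_pairs: str) -> str:
    """Extract printable text from a run of SCC hex byte pairs."""
    out = []
    raw = bytes.fromhex(hex_pairs.replace(" ", ""))
    i = 0
    while i < len(raw):
        b = raw[i] & 0x7F          # strip the odd-parity bit
        if 0x10 <= b <= 0x1F:      # control code: skip the two-byte command
            i += 2
            continue
        if 0x20 <= b <= 0x7E:      # basic printable character (ASCII approximation)
            out.append(chr(b))
        i += 1
    return "".join(out)

# Hypothetical SCC line: timecode, tab, then caption commands followed by text.
line = "00:00:02:00\t9420 9420 94ae 94ae 9470 9470 6869 6768 20f7 696e 6473"
tc, payload = line.split("\t")
print(round(timecode_to_seconds(tc), 2))  # → 2.0
print(decode_scc_text(payload))           # → high winds
```

Positioning and display-mode commands are simply skipped here; a real decoder also has to interpret those control codes (pop-on vs. paint-on captions, row placement), which is part of why authoring a correct caption file is as laborious as the commenter says.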
