
Closed captioning

From Wikipedia, the free encyclopedia

The "CC in a TV" symbol was created at WGBH.
The "Slashed ear" symbol is the International Symbol for Deafness used by TVNZ and other New Zealand broadcasters, as well as on VHS tapes released by Alliance Atlantis. The symbol was used on road signs to identify TTY access.

Closed captioning (CC) and subtitling are both processes of displaying text on a television, video screen, or other visual display to provide additional or interpretive information. Both are typically used as a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements. Captions have also been used to provide a textual translation of a presentation's primary audio into a different language, usually burned in (or "open") to the video and unselectable. HTML5 defines subtitles as a "transcription or translation of the dialogue ... when sound is available but not understood" by the viewer (for example, dialogue in a foreign language) and captions as a "transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information ... when sound is unavailable or not clearly audible" (for example, when audio is muted or the viewer is deaf or hard of hearing).[1]


The term "closed" (versus "open") indicates that the captions are not visible until activated by the viewer, usually via the remote control or a menu option. By contrast, "open", "burned-in", "baked on", "hard-coded", or simply "hard" captions are visible to all viewers.

Most of the world does not distinguish captions from subtitles.[citation needed] In the United States and Canada, these terms have different meanings. "Subtitles" assume the viewer can hear but cannot understand the language or accent, or that the speech is not entirely clear, so they transcribe only dialogue and some on-screen text. "Captions" aim to describe to the deaf and hard of hearing all significant audio content: spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking, along with any significant music or sound effects, using words or symbols. The term closed caption has also come to refer to the North American EIA-608 encoding that is used with NTSC-compatible video.
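At the byte level, EIA-608 transmits caption data as pairs of seven-bit characters protected by odd parity. A minimal sketch of the decode, assuming the basic character set only and ignoring control codes and the handful of remapped characters in real EIA-608 (function names are illustrative):

```python
def odd_parity_ok(b: int) -> bool:
    # EIA-608 bytes carry odd parity: the total number of set bits must be odd.
    return bin(b).count("1") % 2 == 1

def decode_608_pair(b1: int, b2: int) -> str:
    """Decode one line-21 byte pair into printable text (basic charset only)."""
    chars = []
    for b in (b1, b2):
        if not odd_parity_ok(b):
            continue                 # parity error: drop the byte
        code = b & 0x7F              # strip the parity bit
        if code >= 0x20:             # codes below 0x20 are control/preamble codes
            chars.append(chr(code))  # real 608 remaps a few codes (e.g. accented letters)
    return "".join(chars)

print(decode_608_pair(0xC8, 0x49))   # → "HI" ('H'=0x48 gains a parity bit; 'I'=0x49 is already odd)
```

A byte that fails the parity check is simply dropped here; real decoders typically substitute a fallback character instead.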

The United Kingdom, Ireland, and most other countries do not distinguish between subtitles and closed captions and use "subtitles" as the general term. The equivalent of "captioning" is usually referred to as "subtitles for the hard of hearing". Their presence is indicated on screen by a notation such as "Subtitles", or previously "Subtitles 888" or just "888" (the latter two refer to the conventional teletext channel for captions), which is why the term subtitle is also used for the Ceefax-based Teletext encoding used with PAL-compatible video. The term subtitle has been replaced with caption in a number of markets, such as Australia and New Zealand, that purchase large amounts of imported US material, much of which had the US CC logo already superimposed over its start. In New Zealand, broadcasters superimpose an ear logo with a line through it to represent subtitles for the hard of hearing, even though these are now referred to as captions. In the UK, modern digital television services have subtitles for the majority of programs, so it is no longer necessary to highlight which programs have them and which do not.

Remote control handsets for TVs, DVDs, and similar devices in most European markets often use "SUB" or "SUBTITLE" on the button used to control the display of subtitles/captions.


Open captioning

Regular open-captioned broadcasts began on PBS's The French Chef in 1972.[2] WGBH began open captioning of the programs Zoom, ABC World News Tonight, and Once Upon a Classic shortly thereafter.

Technical development of closed captioning

Closed captioning was first demonstrated at the First National Conference on Television for the Hearing Impaired in Nashville, Tennessee in 1971.[2] A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of The Mod Squad.

The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA.[2] As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.
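Setting aside a single VBI line gives only a modest data budget: one two-byte EIA-608 pair per frame on line 21 of field 1 (a second-field service was standardized later). A quick sketch of the arithmetic, using only the NTSC frame rate:

```python
FRAME_RATE = 30000 / 1001        # NTSC frame rate ≈ 29.97 frames/s
BYTES_PER_FRAME = 2              # one EIA-608 byte pair on line 21, field 1

raw_rate = FRAME_RATE * BYTES_PER_FRAME
print(round(raw_rate))           # ≈ 60 bytes/s, i.e. up to ~60 characters of caption text per second
```

Slow by modern standards, but comfortably enough for caption text, as the next section's real-time figures show.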

Real-time captioning, a process for captioning live broadcasts, was developed by the National Captioning Institute in 1982.[2] In real-time captioning, court reporters trained to write at speeds of over 225 words per minute give viewers instantaneous access to live news, sports, and entertainment. As a result, the viewer sees the captions within two to three seconds of the words being spoken.
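The 225-words-per-minute figure fits easily within line 21's capacity, as a rough calculation shows (the characters-per-word average is an assumption, not from the source):

```python
STENO_WPM = 225                  # real-time court reporter speed, per the text
CHARS_PER_WORD = 6               # assumed rough average, including the trailing space

chars_per_second = STENO_WPM / 60 * CHARS_PER_WORD
print(chars_per_second)          # 22.5 — well under line 21's ~60 byte/s ceiling
```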

Major US producers of captions are WGBH-TV, VITAC, CaptionMax and the National Captioning Institute. In the UK and Australasia, Red Bee Media, itfc, and Independent Media Support are the major vendors.

Improvements in speech recognition technology mean that live captioning may be fully or partially automated. BBC Sport broadcasts use a "respeaker": a trained human who repeats the running commentary (with careful enunciation and some simplification and markup) for input to the automated text generation system. This is generally reliable, though errors are not unknown.[3]

Full-scale closed captioning

The National Captioning Institute was created in 1979 in order to get the cooperation of the commercial television networks.[4]

The first use of regularly scheduled closed captioning on American television occurred on March 16, 1980.[5] Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were a Disney's Wonderful World presentation of the film Son of Flubber on NBC, an ABC Sunday Night Movie airing of Semi-Tough, and Masterpiece Theatre on PBS.[6]

Legislative development in the U.S.

Until the passage of the Television Decoder Circuitry Act of 1990, television captioning was performed by a set-top box manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). (At that time a set-top decoder cost about as much as a TV set itself, approximately $200.) Through discussions with the manufacturer it was established that the appropriate circuitry integrated into the television set would be less expensive than the stand-alone box, and Ronald May, then a Sanyo employee, provided the expert witness testimony on behalf of Sanyo and Gallaudet University in support of the passage of the bill. On January 23, 1991, the Television Decoder Circuitry Act of 1990 was passed by Congress.[2] This Act gave the Federal Communications Commission (FCC) the power to enact rules on the implementation of closed captioning. It required all analog television receivers with screens 13 inches or greater, whether sold or manufactured, to have the ability to display closed captioning by July 1, 1993.[7]

Also, in 1990, the Americans with Disabilities Act (ADA) was passed to ensure equal opportunity for persons with disabilities.[4] The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities—such as hospitals, bars, shopping centers and museums (but not movie theaters)—provide access to verbal information on televisions, films or slide shows.

The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002.[8] All TV programming distributors in the U.S. have been required to provide closed captions for Spanish-language video programming since January 1, 2010.[9]

A bill, H.R. 3101, the Twenty-First Century Communications and Video Accessibility Act of 2010, was passed by the United States House of Representatives in July 2010.[10] A similar bill, S. 3304, with the same name, was passed by the United States Senate on August 5, 2010, by the House of Representatives on September 28, 2010, and was signed by President Barack Obama on October 8, 2010. The Act requires, in part, that remotes for ATSC-decoding set-top boxes have a button to turn the closed captioning in the output signal on or off. It also requires broadcasters to provide captioning for television programs redistributed on the Internet.[11]

On February 20, 2014, the FCC unanimously approved the implementation of quality standards for closed captioning,[12] addressing accuracy, timing, completeness, and placement. This is the first time the FCC has addressed quality issues in captions.


Legislative development in the Philippines

As amended by RA 10905, all TV networks in the Philippines are required to provide closed captions.[13] As of 2018, the country's three major TV networks were testing the closed captioning system on their transmissions. ABS-CBN added CC to its daily afternoon program 3 O'Clock Habit. 5 began implementing CCs on its live noon and nightly news programs. GMA once broadcast nightly and late-night news programs with CCs but has since stopped adding them; only select Korean dramas, local or foreign movies, Biyahe ni Drew (English: Drew's Explorations) and Idol sa Kusina (English: Kitchen Idol) air with proper closed captioning.[14]

Legislative development in Australia

The government of Australia provided seed funding in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982 and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as Media Access Australia, sold its commercial captioning division to Red Bee Media in December 2005. Red Bee Media continues to provide captioning services in Australia today.[15][16][17]

Funding development in New Zealand

In 1981, TVNZ held a telethon to raise funds for Teletext-encoding equipment used for the creation and editing of text-based broadcast services for the deaf. The service came into use in 1984, with caption creation and importing paid for as part of the public broadcasting fee until the creation of the NZ on Air taxpayer fund. That fund is used to provide captioning for NZ On Air content and TVNZ news shows, and to convert EIA-608 US captions to the preferred EBU STL format, for TVNZ 1, TV 2 and TV 3 only, with archived captions available to FOUR and select Sky programming. During the second half of 2012, TV3 and FOUR began providing non-Teletext DVB image-based captions on their HD service, and used the same format on the satellite service. This has since caused major timing issues in relation to server load, as well as the loss of captions from most SD DVB-S receivers, such as those Sky Television provides to its customers. As of April 2, 2013, only the Teletext page 801 caption service remains in use, the informational non-caption Teletext content having been discontinued.


Closed captions were created for deaf or hard of hearing individuals to assist in comprehension. They can also be used as a tool by those learning to read, learning to speak a non-native language, or in an environment where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio.

In the United States, the National Captioning Institute noted that English as a foreign or second language (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.[18]

Closed captions are also used in public environments, such as bars and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs. In addition, online videos may have their audio processed by automatic speech-recognition systems ("robots"), which can compound transcription errors in multiple chains. When a video is truly and accurately transcribed, the closed-captioning publication serves a useful purpose, and the content becomes available for search engines to index and make available to users on the internet.[19][20][21]

Some television sets can be set to automatically turn captioning on when the volume is muted.

Television and video

For live programs, spoken words in the television program's soundtrack are transcribed by a human operator (a speech-to-text reporter) using stenotype or stenomask machines, whose phonetic output is instantly translated into text by a computer and displayed on the screen. This technique was developed in the 1970s as an initiative of the BBC's Ceefax teletext service.[22] In collaboration with the BBC, a university student took on the research project of writing the first phonetics-to-text conversion program for this purpose. Captions of live broadcasts, such as news bulletins, sports events, and live entertainment shows, sometimes lag a few seconds behind the audio. This delay occurs because the machine does not know what the person is going to say next, so the captions can only appear after a sentence has been spoken.[23] Automatic computer speech recognition works well when trained to recognize a single voice, so since 2003 the BBC has done live subtitling by having someone re-speak what is being broadcast. Live captioning is also a form of real-time text. Meanwhile, sports events on ESPN use court reporters with special (steno) keyboards and individually constructed "dictionaries."

In some cases, the transcript is available beforehand, and captions are simply displayed during the program after being edited. For programs that have a mix of pre-prepared and live content, such as news bulletins, a combination of techniques is used.

For prerecorded programs, commercials, and home videos, audio is transcribed and captions are prepared, positioned, and timed in advance.

For all types of NTSC programming, captions are "encoded" into line 21 of the vertical blanking interval - a part of the TV picture that sits just above the visible portion and is usually unseen. For ATSC (digital television) programming, three streams are encoded in the video: two are backward compatible "line 21" captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format.[24]

Captioning is modulated and stored differently in PAL and SECAM 625-line, 25-frame countries, where teletext is used rather than EIA-608, but the methods of preparation and the line 21 field used are similar. For home Betamax and VHS videotapes, this line 21 field must be shifted down due to the greater number of VBI lines used in 625-line PAL countries, though only a small minority of European PAL VHS machines support this (or any) format for closed caption recording. Like all teletext fields, teletext captions cannot be stored by a standard 625-line VHS recorder (due to the lack of field-shifting support); they are available on all professional S-VHS recordings because all fields are recorded. Recorded teletext caption fields also suffer from a higher number of caption errors, due to the increased number of bits and a low SNR, especially on low-bandwidth VHS. This is why teletext captions used to be stored on floppy disk separately from the analogue master tape. DVDs use their own system for subtitles and captions, which are digitally inserted into the data stream and decoded on playback into video.

For older televisions, a set-top box or other decoder is usually required. In the US, since the passage of the Television Decoder Circuitry Act, manufacturers of most television receivers sold have been required to include closed captioning display capability. High-definition TV sets, receivers, and tuner cards are also covered, though the technical specifications are different (high-definition display screens, as opposed to high-definition TVs, may lack captioning). Canada has no similar law but receives the same sets as the US in most cases.

During transmission, single-byte errors can be replaced by a white space, which can appear at the beginning of the program. More extensive byte errors during EIA-608 transmission can affect the screen momentarily, defaulting to a real-time mode such as the "roll up" style, typing random letters on screen, and then reverting to normal. Uncorrectable byte errors within the teletext page header cause whole captions to be dropped. Because EIA-608 uses only two characters per video frame, it sends captions ahead of time and stores them in a second buffer awaiting a command to display them; Teletext sends them in real time.
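As an illustration of the parity scheme behind these single-byte errors, here is a minimal Python sketch (a simplification for this article; real decoders typically substitute a solid block character rather than a plain space). Each EIA-608 byte carries seven data bits plus an odd-parity bit in the most significant bit.

```python
def has_odd_parity(byte: int) -> bool:
    """Return True if the byte's eight bits contain an odd number of ones."""
    return bin(byte).count("1") % 2 == 1

def decode_pair(b1: int, b2: int) -> str:
    """Strip parity from an EIA-608 byte pair; blank out bytes that fail parity."""
    chars = []
    for b in (b1, b2):
        if has_odd_parity(b):
            chars.append(chr(b & 0x7F))  # keep the 7 data bits
        else:
            chars.append(" ")            # single-byte error -> white space
    return "".join(chars)
```

For example, 'H' (0x48) has an even number of one bits, so it is transmitted as 0xC8 with the parity bit set; 'I' (0x49) already has odd parity and is sent unchanged.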

The use of capitalization varies among caption providers. Most caption providers capitalize all words while others such as WGBH and non-US providers prefer to use mixed-case letters.

There are two main styles of line 21 closed captioning:

  • Roll-up or scroll-up or paint-on or scrolling: Real-time words sent in paint-on or scrolling mode appear from left to right, up to one line at a time; when a line is filled in roll-up mode, the whole line scrolls up to make way for a new line, and the line on top is erased. The lines usually appear at the bottom of the screen, but can actually be placed on any of the 14 screen rows to avoid covering graphics or action. This method is used when captioning video in real-time such as for live events, where a sequential word-by-word captioning process is needed or a pre-made intermediary file isn't available. This method is signaled on EIA-608 by a two-byte caption command or in Teletext by replacing rows for a roll-up effect and duplicating rows for a paint-on effect. This allows for real-time caption line editing.
A still frame showing simulated closed captioning in the pop-on style
A still frame showing simulated closed captioning in the pop-on style
  • Pop-on or pop-up or block: A caption appears on any of the 14 screen rows as a complete sentence, which can be followed by additional captions. This method is used when captions come from an intermediary file (such as the Scenarist or EBU STL file formats) for pre-taped television and film programming, commonly produced at captioning facilities. This method of captioning can be aided by digital scripts or voice recognition software, and if used for live events, would require a video delay to avoid a large delay in the captions' appearance on-screen, which occurs with Teletext-encoded live subtitles.
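The distinction between the two styles can be sketched in a few lines of illustrative Python (the class names and the three-row default are invented for this example, not part of any standard):

```python
from collections import deque

class RollUpDisplay:
    """Roll-up mode: each new line pushes older lines upward; the top line is erased."""
    def __init__(self, rows: int = 3):
        self.lines = deque(maxlen=rows)  # the oldest line falls off automatically

    def add_line(self, text: str):
        self.lines.append(text)

    def render(self) -> list:
        return list(self.lines)

class PopOnDisplay:
    """Pop-on mode: a complete caption is composed off-screen, then swapped in at once."""
    def __init__(self):
        self._buffer = []   # second buffer, filled ahead of time
        self._visible = []

    def buffer_line(self, text: str):
        self._buffer.append(text)

    def flip(self):
        """Display the buffered caption all at once (the 'pop')."""
        self._visible, self._buffer = self._buffer, []

    def render(self) -> list:
        return list(self._visible)
```

In roll-up mode nothing is hidden from the viewer while text accumulates, which suits word-by-word live captioning; in pop-on mode the viewer only ever sees complete, pre-composed captions.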

Caption formatting

TVNZ Access Services and Red Bee Media for BBC and Australia example:

I got the machine ready.
            (speeding away)

UK IMS for ITV and Sky example:

(man) I got the machine ready. (engine starting)

US WGBH Access Services example:

MAN: I got the machine ready.
            (engine starting)

US National Captioning Institute example:


US other provider example:

            [engine starting]

US in-house real-time roll-up example:

[engine starting]

Non-US in-house real-time roll-up example:

    MAN: I got the machine ready.
            (ENGINE STARTING)


For real-time captioning done outside of captioning facilities, the following syntax is used:

  • '>>' (two prefixed greater-than signs) indicates a change in single speaker.
    • Sometimes appended with the speaker's name in alternate case, followed by a colon.
  • '>>>' (three prefixed greater-than signs) indicates a change in news story or multiple speakers.
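A hypothetical helper for splitting a real-time transcript on these markers might look like this (the function and its output shape are illustrative only; the marker meanings follow the list above):

```python
import re

def split_caption_stream(text: str):
    """Yield (marker, segment) pairs for each '>>'- or '>>>'-prefixed chunk."""
    # '>>>' is listed before '>>' in the alternation so the longer marker wins.
    parts = re.split(r"(>{3}|>{2})", text)
    it = iter(parts)
    leading = next(it).strip()
    if leading:
        yield (None, leading)  # any text before the first marker
    for marker, segment in zip(it, it):
        yield (marker, segment.strip())
```

A change of speaker then arrives as a `'>>'` pair and a change of news story as a `'>>>'` pair, which downstream display code could use to clear or reposition the caption area.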

Styles of syntax that are used by various captioning producers:

  • Capitals indicate main on-screen dialogue and the name of the speaker.
    • Legacy EIA-608 home caption decoder fonts had no descenders on lowercase letters.
    • Outside North America, capitals with background coloration indicate a song title or sound effect description.
    • Outside North America, capitals with black or no background coloration indicate when a word is stressed or emphasized.
  • Descenders indicate background sound description and off-screen dialogue.
    • Most modern caption producers, such as WGBH-TV, use mixed case for both on-screen and off-screen dialogue.
  • '-' (a prefixed dash) indicates a change in single speaker (used by CaptionMax).
  • Words in italics indicate when a word is stressed or emphasized and when real world names are quoted.
    • Italics and bold type are only supported by EIA-608.
    • Some North American providers use this for narrated dialogue.
  • Text coloration indicates captioning credits and sponsorship.
    • Occasionally, it is for a karaoke effect for music videos on MTV or VH-1.
    • In Ceefax/Teletext countries, it indicates a change in single speaker in place of '>>'.
    • Some Teletext countries use coloration to indicate when a word is stressed or emphasized.
    • Coloration is limited to white, green, blue, cyan, red, yellow and magenta.
    • UK order of use for text is white, green, cyan, yellow; for backgrounds it is black, red, blue, magenta, white.
    • US order of use for text is white, yellow, cyan, green; for backgrounds it is black, blue, red, magenta, white.
  • Square brackets or parentheses indicate a song title or sound effect description.
  • Parentheses indicate speaker's vocal pitch e.g., (man), (woman), (boy) or (girl).
    • Outside North America, parentheses indicate a silent on-screen action.
  • A pair of eighth notes is used to bracket a line of lyrics to indicate singing.
    • A pair of eighth notes on a line of no text are used during a section of instrumental music.
    • Outside North America, a single number sign is used on a line of lyrics to indicate singing.
    • An additional musical notation character is appended to the end of the last line of lyrics to indicate the song's end.
    • As the symbol is unsupported by Ceefax/Teletext, a number sign - which resembles a musical sharp - is substituted.

Technical aspects

There were many shortcomings in the original Line 21 specification from a typographic standpoint; for example, it lacked many of the characters required for captioning in languages other than English. Since that time, the core Line 21 character set has been expanded to include quite a few more characters, handling most requirements for languages common in North and South America such as French, Spanish, and Portuguese, though those extended characters are not required in all decoders and are thus unreliable in everyday use. The problem has been almost eliminated with a market-specific full set of Western European characters and a privately adopted Norpak extension for the South Korean and Japanese markets. The full EIA-708 standard for digital television has worldwide character set support, but it has seen little use because EBU Teletext, which has its own extended character sets, dominates DVB countries.

Captions are often edited to make them easier to read and to reduce the amount of text displayed onscreen. This editing can range from very minor, with only a few occasional unimportant lines missed, to severe, where virtually every line spoken by the actors is condensed. The measure used to guide this editing is words per minute, commonly varying from 180 to 300, depending on the type of program. Offensive words are also captioned, but if the program is censored for TV broadcast, the broadcaster might not have arranged for the captioning to be edited or censored as well. The "TV Guardian", a television set-top box, is available to parents who wish to censor offensive language in programs: the video signal is fed into the box, and if it detects an offensive word in the captioning, the audio signal is bleeped or muted for that period of time.
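The words-per-minute guideline above can be sketched as a simple check (a hypothetical tool, assuming caption cues are available as text plus duration; the 240 wpm default is just one example within the 180-300 range mentioned):

```python
def words_per_minute(text: str, duration_seconds: float) -> float:
    """Presentation rate of a single caption cue, in words per minute."""
    return len(text.split()) / (duration_seconds / 60.0)

def cues_needing_editing(cues, max_wpm: float = 240.0):
    """cues: iterable of (text, duration_seconds) pairs.

    Returns the cues whose rate exceeds the target, i.e. candidates for
    condensing by a caption editor.
    """
    return [(text, dur) for text, dur in cues
            if words_per_minute(text, dur) > max_wpm]
```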

Caption channels

A bug touting CC1 and CC3 captions (on Telemundo)
A bug touting CC1 and CC3 captions (on Telemundo)

The Line 21 data stream can consist of data from several data channels multiplexed together. Odd field 1 can carry four data channels: two separate synchronized caption channels (CC1, CC2) and two channels of caption-related text, such as website URLs (T1, T2). Even field 2 can carry five additional data channels: two separate synchronized caption channels (CC3, CC4), two caption-related text channels (T3, T4), and Extended Data Services (XDS) for Now/Next EPG details. The XDS data structure is defined in CEA-608.

As CC1 and CC2 share bandwidth, if there is a lot of data in CC1, there is little room left for CC2 data, and CC1 is generally the only field-1 caption channel used, carrying the primary audio captions. Similarly, CC3 and CC4 share line 21's even field 2. Since some early caption decoders supported decoding only of field 1 (CC1 and CC2), captions for a SAP in a second language were often placed in CC2. This led to bandwidth problems, and the U.S. Federal Communications Commission (FCC) recommendation is that bilingual programming should have the second caption language in CC3. Many Spanish-language television networks, such as Univision and Telemundo, provide English subtitles for many of their Spanish programs in CC3. Canadian broadcasters use CC3 for French-translated SAPs; a similar practice is followed in South Korea and Japan.
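The field and channel layout described above can be summarized as a small lookup table (a simplification for illustration; the service names follow CEA-608, but the function itself is hypothetical):

```python
# (field, data_channel) -> Line 21 service, per the layout described above.
LINE21_SERVICES = {
    (1, 1): "CC1",  # primary synchronized captions
    (1, 2): "CC2",  # shares field-1 bandwidth with CC1
    (1, 3): "T1",   # caption-related text, e.g. website URLs
    (1, 4): "T2",
    (2, 1): "CC3",  # FCC-recommended home for a second caption language
    (2, 2): "CC4",
    (2, 3): "T3",
    (2, 4): "T4",
}
# Field 2 additionally carries XDS (Extended Data Services) for Now/Next EPG data.

def service_for(field: int, data_channel: int) -> str:
    """Name the Line 21 service at a given field/channel position."""
    return LINE21_SERVICES.get((field, data_channel), "unknown")
```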

Ceefax and Teletext can have a larger number of captions for other languages due to the use of multiple VBI lines. However, only European countries used a second subtitle page for second-language audio tracks, where either NICAM dual mono or Zweikanalton was used.

Digital television interoperability issues


The US ATSC digital television system originally specified two different kinds of closed captioning datastream standards: the original analog-compatible format (available by Line 21) and the more modern digital-only CEA-708 format, both delivered within the video stream.[24] The US FCC mandates that broadcasters deliver (and generate, if necessary) both datastream formats, with the CEA-708 format merely a conversion of the Line 21 format.[24] The Canadian CRTC has not mandated that broadcasters deliver both datastream formats or exclusively one. To avoid large conversion-cost outlays, most broadcasters and networks simply provide EIA-608 captions along with a transcoded CEA-708 version encapsulated within CEA-708 packets.

Incompatibility issues with digital TV

Many viewers find that when they acquire a digital television or set-top box they are unable to view closed caption (CC) information, even though the broadcaster is sending it and the TV is able to display it.

Originally, CC information was included in the picture ("line 21") via a composite video input, but there is no equivalent capability in digital video interconnects (such as DVI and HDMI) between the display and a "source". A "source", in this case, can be a DVD player or a terrestrial or cable digital television receiver. When CC information is encoded in the MPEG-2 data stream, only the device that decodes the MPEG-2 data (a source) has access to the closed caption information; there is no standard for transmitting the CC information to a display monitor separately. Thus, if there is CC information, the source device needs to overlay the CC information on the picture prior to transmitting to the display over the interconnect's video output.

Many source devices do not have the ability to overlay CC information, and controlling the CC overlay can be complicated. For example, the Motorola DCT-5xxx and -6xxx cable set-top receivers have the ability to decode CC information located on the MPEG-2 stream and overlay it on the picture, but turning CC on and off requires turning off the unit and going into a special setup menu (it is not on the standard configuration menu and it cannot be controlled using the remote). Historically, DVD players, VCRs and set-top tuners did not need to do this overlaying, since they simply passed this information on to the TV, and they are not mandated to perform this overlaying.

Many modern digital television receivers can be directly connected to cables, but often cannot receive scrambled channels that the user is paying for. Thus, the lack of a standard way of sending CC information between components, along with the lack of a mandate to add this information to a picture, results in CC being unavailable to many hard-of-hearing and deaf users.


The EBU Ceefax-based teletext systems are the source for closed captioning signals; thus, when teletext is embedded into DVB-T or DVB-S, the closed captioning signal is included.[25] However, for DVB-T and DVB-S, it is not necessary for a teletext page signal to also be present (ITV1, for example, does not carry analogue teletext signals on Sky Digital, but does carry the embedded version, accessible from the "Services" menu of the receiver, or more recently by turning them off/on from a mini menu accessible from the "help" button).

New Zealand

In New Zealand, captions use an EBU Ceefax-based teletext system on DVB broadcasts via satellite and cable television, with the exception of MediaWorks New Zealand channels, which switched completely to DVB RLE subtitles in 2012 on both Freeview satellite and UHF broadcasts. This decision was based on the TVNZ practice of using this format only on DVB UHF broadcasts (aka Freeview HD). This made composite-video-connected TVs incapable of decoding the captions on their own. Also, these pre-rendered subtitles use classic caption-style opaque backgrounds with an overly large font size, and they obscure the picture more than the more modern, partially transparent backgrounds do.

Digital television standard captioning improvements

The CEA-708 specification provides for dramatically improved captioning:

  • An enhanced character set with more accented letters and non-Latin letters, and more special symbols
  • Viewer-adjustable text size (called the "caption volume control" in the specification), allowing individuals to adjust their TVs to display small, normal, or large captions
  • More text and background colors, including both transparent and translucent backgrounds to optionally replace the big black block
  • More text styles, including edged or drop shadowed text rather than the letters on a solid background
  • More text fonts, including monospaced and proportional spaced, serif and sans-serif, and some playful cursive fonts
  • Higher bandwidth, to allow more data per minute of video
  • More language channels, to allow the encoding of more independent caption streams

As of 2009, most closed captioning for digital television environments is done using tools designed for analog captioning (working to the CEA-608 NTSC specification rather than the CEA-708 ATSC specification). The captions are then run through transcoders made by companies like EEG Enterprises or Evertz, which convert the analog Line 21 caption format to the digital format. This means that none of the CEA-708 features are used unless they were also contained in CEA-608.

Uses of captioning in other media

DVDs & Blu-ray Discs

NTSC DVDs may carry closed captions in data packets of the MPEG-2 video streams inside the VIDEO_TS folder. Once played out of the analog outputs of a set-top DVD player, the caption data is converted to the Line 21 format.[26] It is output by the player on composite video (or an available RF connector) for a connected TV's built-in decoder or a set-top decoder as usual. It cannot be output on S-Video or component video outputs, due to the lack of a colorburst signal on line 21. (Regardless of this, if the DVD player is in interlaced rather than progressive mode, closed captioning will be displayed on the TV over component video input if the TV captioning is turned on and set to CC1.) When viewed on a personal computer, caption data can be viewed by software that can read and decode the caption data packets in the MPEG-2 streams of the DVD-Video disc. Windows Media Player in Windows Vista (before Windows 7) supported only closed caption channels 1 and 2 (not 3 or 4). Apple's DVD Player cannot read and decode Line 21 caption data recorded on a DVD made from an over-the-air broadcast, though it can display some movie DVD captions.

In addition to Line 21 closed captions, video DVDs may also carry subtitles, which are generally rendered from the EIA-608 captions as a bitmap overlay that can be turned on and off via a set-top DVD player or DVD player software, just like the textual captions. This type of captioning is usually carried in a subtitle track labeled either "English for the hearing impaired" or, more recently, "SDH" (subtitles for the deaf and hard of hearing).[27] Many popular Hollywood DVD-Videos carry both subtitles and closed captions (e.g. the Stepmom DVD by Columbia Pictures). On some DVDs, the Line 21 captions contain the same text as the subtitles; on others, only the Line 21 captions include the additional non-speech information (sometimes even song lyrics) needed for deaf and hard-of-hearing viewers. European Region 2 DVDs do not carry Line 21 captions, and instead list the subtitle languages available; English is often listed twice, once as a representation of the dialogue alone, and again as a subtitle set that carries additional information for the deaf and hard-of-hearing audience. (Many deaf/HOH subtitle files on DVDs are reworkings of original teletext subtitle files.)

Blu-ray Disc media cannot carry any VBI data such as Line 21 closed captioning, due to the design of the DVI-based High-Definition Multimedia Interface (HDMI) specification, which was extended only for synchronized digital audio, replacing older analog standards such as VGA, S-Video, component video, and SCART. Both Blu-ray Disc and DVD can use either PNG bitmap subtitles or "advanced subtitles" to carry SDH-type subtitling, the latter being an XML-based textual format that includes font, styling and positioning information as well as a Unicode representation of the text. Advanced subtitling can also include additional media accessibility features such as "descriptive audio".


Movie theaters

There are several competing technologies used to provide captioning for movies in theaters. Cinema captioning falls into the categories of open and closed. The definition of "closed" captioning in this context differs from television, as it refers to any technology that allows as few as one member of the audience to view the captions.

Open captioning in a film theater can be accomplished through burned-in captions, projected text or bitmaps, or (rarely) a display located above or below the movie screen. Typically, this display is a large LED sign. In a digital theater, open caption display capability is built into the digital projector. Closed caption capability is also available, with the ability for 3rd-party closed caption devices to plug into the digital cinema server.

Probably the best known closed captioning option for film theaters is the Rear Window Captioning System from the National Center for Accessible Media. Upon entering the theater, viewers requiring captions are given a panel of flat translucent glass or plastic on a gooseneck stalk, which can be mounted in front of the viewer's seat. In the back of the theater is an LED display that shows the captions in mirror image. The panel reflects captions for the viewer but is nearly invisible to surrounding patrons. The panel can be positioned so that the viewer watches the movie through the panel, and captions appear either on or near the movie image. A company called Cinematic Captioning Systems has a similar reflective system called Bounce Back. A major problem for distributors has been that these systems are each proprietary, and require separate distributions to the theater to enable them to work. Proprietary systems also incur license fees.

For film projection systems, Digital Theater Systems, the company behind the DTS surround sound standard, has created a digital captioning device called the DTS-CSS (Cinema Subtitling System). It is a combination of a laser projector which places the captioning (words, sounds) anywhere on the screen and a thin playback device with a CD that holds many languages. If the Rear Window Captioning System is used, the DTS-CSS player is also required for sending caption text to the Rear Window sign located in the rear of the theater.

Special effort has been made to build accessibility features into digital projection systems (see digital cinema). Through SMPTE, standards now exist that dictate how open and closed captions, as well as hearing-impaired and visually impaired narrative audio, are packaged with the rest of the digital movie. This eliminates the proprietary caption distributions required for film, and the associated royalties. SMPTE has also standardized the communication of closed caption content between the digital cinema server and 3rd-party closed caption systems (the CSP/RPL protocol). As a result, new, competitive closed caption systems for digital cinema are now emerging that will work with any standards-compliant digital cinema server. These newer closed caption devices include cupholder-mounted electronic displays and wireless glasses which display caption text in front of the wearer's eyes.[28] Bridge devices are also available to enable the use of Rear Window systems. As of mid-2010, the remaining challenge to the wide introduction of accessibility in digital cinema is the industry-wide transition to SMPTE DCP, the standardized packaging method for very high quality, secure distribution of digital movies.

Sports venues

Captioning systems have also been adopted by most major league and high-profile college stadiums and arenas, typically through dedicated portions of their main scoreboards or as part of balcony fascia LED boards. These screens display captions of the public address announcer and other spoken content, such as those contained within in-game segments, public service announcements, and lyrics of songs played in-stadium. In some facilities, these systems were added as a result of discrimination lawsuits. Following a lawsuit under the Americans with Disabilities Act, FedExField added caption screens in 2006.[29][30] Some stadiums utilize on-site captioners while others outsource them to external providers who caption remotely.[31][32]

Video games

The infrequent appearance of closed captioning in video games became a problem in the 1990s as games began to commonly feature voice tracks, which in some cases contained information which the player needed in order to know how to progress in the game.[33] Closed captioning of video games is becoming more common. One of the first video game companies to feature closed captioning was Bethesda Softworks in their 1990 release of Hockey League Simulator and The Terminator 2029.[citation needed] Infocom also offered Zork Grand Inquisitor in 1997.[34] Many games since then have at least offered subtitles for spoken dialog during cutscenes, and many include significant in-game dialog and sound effects in the captions as well; for example, with subtitles turned on in the Metal Gear Solid series of stealth games, not only are subtitles available during cut scenes, but any dialog spoken during real-time gameplay will be captioned as well, allowing players who can't hear the dialog to know what enemy guards are saying and when the main character has been detected. Also, in many of developer Valve Corporation's video games (such as Half-Life 2 or Left 4 Dead), when closed captions are activated, dialog and nearly all sound effects either made by the player or from other sources (e.g. gunfire, explosions) will be captioned.

Video games do not offer Line 21 captioning decoded and displayed by the television itself, but rather a built-in subtitle display more akin to that of a DVD. The game systems themselves have no role in the captioning either; each game must have its subtitle display programmed individually.

Reid Kimball, a game designer who is hearing impaired, is attempting to educate game developers about closed captioning for games. Reid started the Games[CC] group to closed caption games and serve as a research and development team to aid the industry. Kimball designed the Dynamic Closed Captioning system,[citation needed] writes articles and speaks at developer conferences. Games[CC]'s first closed captioning project called Doom3[CC] was nominated for an award as Best Doom3 Mod of the Year for IGDA's Choice Awards 2006 show.

Online video streaming

Internet video streaming service YouTube offers captioning services in videos. The author of the video can upload a SubViewer (*.SUB), SubRip (*.SRT) or *.SBV file.[35] As a beta feature, the site also added the ability to automatically transcribe and generate captioning on videos, with varying degrees of success based upon the content of the video.[36] The automatic captioning is often inaccurate on videos with background music or exaggerated emotion in speaking. Variations in volume can also result in nonsensical machine-generated captions. Additional problems arise with strong accents, sarcasm, differing contexts, or homonyms.[37]
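For illustration, a SubRip (.SRT) file of the kind YouTube accepts can be generated with a short, self-contained sketch (the cue data and helper names here are invented; the `HH:MM:SS,mmm --> HH:MM:SS,mmm` timestamp form is SubRip's):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time offset as SubRip's HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues) -> str:
    """cues: iterable of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)
```

Each cue becomes a numbered block separated by a blank line, which is the structure SubRip parsers expect.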

On June 30, 2010, YouTube announced a new "YouTube Ready" designation for professional caption vendors in the United States.[38] The initial list included twelve companies that passed a caption quality evaluation administered by the Described and Captioned Media Project, have a website and a YouTube channel where customers can learn more about their services, and have agreed to post rates for the range of services they offer for YouTube content.

Flash video also supports captions using the Distribution Exchange profile (DFXP) of W3C timed text format. The latest Flash authoring software adds free player skins and caption components that enable viewers to turn captions on/off during playback from a web page. Previous versions of Flash relied on the Captionate 3rd party component and skin to caption Flash video. Custom Flash players designed in Flex can be tailored to support the timed-text exchange profile, Captionate .XML, or SAMI file (e.g. Hulu captioning). This is the preferred method for most US broadcast and cable networks that are mandated by the U.S. Federal Communications Commission to provide captioned on-demand content. The media encoding firms generally use software such as MacCaption to convert EIA-608 captions to this format. The Silverlight Media Framework[39] also includes support for the timed-text exchange profile for both download and adaptive streaming media.

Windows Media Video can support closed captions for both video-on-demand and live streaming scenarios. Typically, Windows Media captions use the SAMI file format, but the video can also carry embedded closed caption data.
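
SAMI (Synchronized Accessible Media Interchange) is an HTML-like format: each `SYNC` element carries a start time in milliseconds and a `P` element with the caption text. A simplified generator sketch (real SAMI files also include a `STYLE` block defining the language class; the class name here is illustrative):

```python
def build_sami(cues, lang_class="ENUSCC"):
    """Emit a minimal SAMI document from (start_ms, text) cues."""
    syncs = "\n".join(
        f"<SYNC Start={start}><P Class={lang_class}>{text}</P></SYNC>"
        for start, text in cues
    )
    return (
        "<SAMI>\n<HEAD><TITLE>Captions</TITLE></HEAD>\n"
        f"<BODY>\n{syncs}\n</BODY>\n</SAMI>"
    )

doc = build_sami([(1000, "Hello, and welcome."), (4000, "[dramatic music]")])
print(doc)
```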

QuickTime video supports raw 608 caption data via a proprietary closed caption track, which consists simply of EIA-608 byte pairs wrapped in a QuickTime packet container, with different IDs for the two line 21 fields. These captions can be turned on and off and appear in the same style as TV closed captions, with all the standard formatting (pop-on, roll-up, paint-on), and can be positioned and split anywhere on the video screen. QuickTime closed caption tracks can be viewed in the Macintosh and Windows versions of QuickTime Player, iTunes (via QuickTime), iPod Nano, iPod Classic, iPod Touch, iPhone, and iPad.
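
Those EIA-608 byte pairs carry two 7-bit characters per field, each with an odd-parity bit in the most significant bit. A sketch of the character packing (control-code pairs for styling and positioning, which real 608 streams interleave with text, are omitted):

```python
def add_odd_parity(byte7):
    """Set bit 7 so the 8-bit value has odd parity, as EIA-608 requires."""
    ones = bin(byte7 & 0x7F).count("1")
    return (byte7 & 0x7F) | (0x80 if ones % 2 == 0 else 0)

def encode_608_pairs(text):
    """Pack ASCII text into the two-byte pairs carried in each line 21 field,
    padding odd-length text with a null character."""
    data = [add_odd_parity(ord(c)) for c in text]
    if len(data) % 2:
        data.append(add_odd_parity(0x00))
    return [(data[i], data[i + 1]) for i in range(0, len(data), 2)]

pairs = encode_608_pairs("HI")
print(pairs)  # [(200, 73)] — 'H' (0x48) gains a parity bit, 'I' (0x49) already has odd parity
```

Decoders strip the parity bit before interpreting each byte, which is why caption data survives being stored in containers that were never parity-aware.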

Theater

Live plays can be open captioned by a captioner who displays lines from the script, including non-speech elements, on a large display screen near the stage.[40] Software is also now available that automatically generates captioning and streams it to individuals sitting in the theater, who view it using heads-up glasses or on a smartphone or tablet computer.

Telephones

A captioned telephone is a telephone that displays real-time captions of the current conversation. The captions are typically displayed on a screen embedded into the telephone base.

Media monitoring services

In the United States especially, most media monitoring services capture and index closed captioning text from news and public affairs programs, allowing them to search the text for client references. The use of closed captioning for television news monitoring was pioneered by Universal Press Clipping Bureau (Universal Information Services) in 1992,[citation needed] and later in 1993 by Tulsa-based NewsTrak of Oklahoma (later known as Broadcast News of Mid-America, acquired by video news release pioneer Medialink Worldwide Incorporated in 1997).[citation needed] US patent 7,009,657 describes a "method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations" as used by news monitoring services.
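
At its core, this kind of monitoring amounts to building an inverted index over captured caption text so that client names can be looked up across stations and air times. A toy sketch of that structure (the class, station call sign, and timestamps are illustrative only, not any vendor's actual system):

```python
from collections import defaultdict

class CaptionIndex:
    """Toy inverted index over captured closed-caption text."""

    def __init__(self):
        self._postings = defaultdict(list)   # word -> [(station, timestamp)]

    def ingest(self, station, timestamp, caption_text):
        """Index every word of a caption line under its station and air time."""
        for word in caption_text.lower().split():
            self._postings[word.strip(".,!?")].append((station, timestamp))

    def search(self, term):
        """Return all (station, timestamp) hits for a client reference."""
        return self._postings.get(term.lower(), [])

idx = CaptionIndex()
idx.ingest("KXYZ", "2019-04-04T18:05", "Acme Corp announced record earnings.")
print(idx.search("acme"))  # [('KXYZ', '2019-04-04T18:05')]
```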

Conversations

Software programs are available that automatically generate closed captioning of conversations. Examples of such conversations include discussions in conference rooms, classroom lectures, and religious services.

Non-linear video editing systems and closed captioning

In 2010, Vegas Pro, the professional non-linear editor, was updated to support importing, editing, and delivering CEA-608 closed captions.[41] Vegas Pro 10, released on October 11, 2010, added several enhancements to the closed captioning support. TV-like CEA-608 closed captioning can now be displayed as an overlay when played back in the Preview and Trimmer windows, making it easy to check the placement, edits, and timing of caption information. CEA-708-style closed captioning is automatically created when the CEA-608 data is created. Line 21 closed captioning is now supported, as well as HD-SDI closed captioning capture and print from AJA and Blackmagic Design cards. Line 21 support provides a workflow for existing legacy media. Other improvements include increased support for multiple closed captioning file types, as well as the ability to export closed caption data for DVD Architect, YouTube, RealPlayer, QuickTime, and Windows Media Player.

In mid-2009, Apple released Final Cut Pro version 7 and began supporting the insertion of closed caption data into SD and HD tape masters via FireWire and compatible video capture cards.[42] Until this time, it was not possible for video editors to insert CEA-608 and CEA-708 caption data into their tape masters. The typical workflow included first printing the SD or HD video to tape and sending it to a professional closed caption service company that had a stand-alone closed caption hardware encoder.

This newer closed captioning workflow, known as e-Captioning, involves making a proxy video from the non-linear system to import into third-party non-linear closed captioning software. Once the closed captioning software project is completed, it exports a closed caption file compatible with the non-linear editing system. In the case of Final Cut Pro 7, three different file formats are accepted: an .SCC file (Scenarist Closed Caption file) for standard-definition video, a QuickTime 608 closed caption track (a special 608-coded track in the .mov file wrapper) for standard-definition video, and a QuickTime 708 closed caption track (a special 708-coded track in the .mov file wrapper) for high-definition video output.
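
The .SCC format is itself simple: a `Scenarist_SCC V1.0` header, then one line per cue pairing a drop-frame timecode with a run of four-hex-digit words, each word being one EIA-608 byte pair (parity bits included). A parsing sketch, with sample byte pairs that are illustrative rather than taken from any real project:

```python
SCC_SAMPLE = """Scenarist_SCC V1.0

00:00:01;00\t9420 9420 94ae 94ae c8e5 ecec efae 942f 942f
"""

def parse_scc(text):
    """Return (timecode, [byte_pair, ...]) cues from a Scenarist .scc file."""
    lines = text.splitlines()
    assert lines[0].startswith("Scenarist_SCC"), "not an SCC file"
    cues = []
    for line in lines[1:]:
        if not line.strip():
            continue
        timecode, words = line.split("\t", 1)
        # Each four-hex-digit word is one EIA-608 byte pair.
        pairs = [(int(w[:2], 16), int(w[2:], 16)) for w in words.split()]
        cues.append((timecode, pairs))
    return cues

cues = parse_scc(SCC_SAMPLE)
print(cues[0][0])       # 00:00:01;00
print(len(cues[0][1]))  # 9
```

Command pairs (such as the doubled `9420` above) and text pairs share the stream, so a full decoder must strip parity and distinguish control codes from characters.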

Alternatively, Matrox video systems devised another mechanism for inserting closed caption data by allowing the video editor to include CEA-608 and CEA-708 in a discrete audio channel on the video editing timeline. This allows real-time preview of the captions while editing and is compatible with Final Cut Pro 6 and 7.[43]

Other non-linear editing systems indirectly support closed captioning only in standard-definition line 21. Video files on the editing timeline must be composited with a line 21 VBI graphic layer, known in the industry as a "blackmovie", containing the closed caption data.[44] Alternatively, video editors working with the DV25 and DV50 FireWire workflows must encode their DV .avi or .mov file with VAUX data that includes CEA-608 closed caption data.

Logo

The current and most familiar logo for closed captioning consists of two Cs (for "closed captioned") inside a television screen. It was created by WGBH. The other logo, trademarked by the National Captioning Institute, is that of a simple geometric rendering of a television set merged with the tail of a speech balloon; two such versions exist – one with a tail on the left, the other with a tail on the right.[45]

See also

References

  1. ^ HTML5 specification, section 4.7.9. Archived 2013-06-06 at the Wayback Machine
  2. ^ a b c d e "A Brief History of Captioned Television". Archived from the original on 2011-07-19.
  3. ^ "Match of the Day 2: Newcastle subtitle error leaves BBC red-faced". BBC Online. 2 October 2017. Retrieved 2 October 2017.
  4. ^ a b National Captioning Institute Archived July 19, 2011, at the Wayback Machine
  5. ^ Gannon, Jack. 1981. Deaf Heritage-A Narrative History of Deaf America. Silver Spring, MD: National Association of the Deaf, pp. 384-387
  6. ^ "Today on TV", Chicago Daily Herald, March 11, 1980, Section 2-5
  7. ^ "Television Decoder Circuitry Act of 1990".
  8. ^ "FCC Consumer Facts on Closed Captioning".
  9. ^ "Part 79 - Closed Captioning of Video Programming".
  10. ^ "Twenty-First Century Communications and Video Accessibility Act of 2010". 2010. Retrieved 2013-03-28.
  11. ^ "Twenty-First Century Communications and Video Accessibility Act of 2010". 2010. Retrieved 2013-03-28.
  12. ^ "FCC Moves to Upgrade TV Closed Captioning Quality". 2014.
  13. ^
  14. ^ Carl Lamiel (October 14, 2017). "GMA, TV5 now airing shows with closed captioning". YugaTech. Retrieved February 2, 2019.
  15. ^ Alex Varley (June 2008). "Submission to DBCDE's investigation into Access to Electronic Media for the Hearing and Vision Impaired" (PDF). Australia: Media Access Australia. pp. 12, 18, 43. Archived from the original (PDF) on 2009-03-26. Retrieved 2009-02-07.
  16. ^ "About Media Access Australia". Australia: Media Access Australia. Retrieved 2009-02-07.
  17. ^ "About Red Bee Media Australia". Australia: Red Bee Media Australia Pty Limited. Archived from the original on June 13, 2009. Retrieved 2009-02-07.
  18. ^ [1] Ofcom, UK: Television access services Archived June 1, 2010, at the Wayback Machine
  19. ^ Alex Varley (June 2008). "Submission to DBCDE's investigation into Access to Electronic Media for the Hearing and Vision Impaired" (PDF). Australia: Media Access Australia. p. 16. Archived from the original (PDF) on 2008-12-03. Retrieved 2009-01-29. The use of captions and audio description is not limited to deaf and blind people. Captions can be used in situations of "temporary" deafness, such as watching televisions in public areas where the sound has been turned down (commonplace in America and starting to appear more in Australia).
  20. ^ Mayor's Disability Council (May 16, 2008). "Resolution in Support of Board of Supervisors' Ordinance Requiring Activation of Closed Captioning on Televisions in Public Areas". City and County of San Francisco. Archived from the original on January 28, 2009. Retrieved 2009-01-29. that television receivers located in any part of a facility open to the general public have closed captioning activated at all times when the facility is open and the television receiver is in use.
  21. ^ Alex Varley (April 18, 2005). "Settlement Agreement Between The United States And Norwegian American Hospital Under The Americans With Disabilities Act". U.S. Department of Justice. Retrieved 2009-01-29. ...will have closed captioning operating in all public areas where there are televisions with closed captioning; televisions in public areas without built-in closed captioning capability will be replaced with televisions that have such capability
  22. ^ "mb21 - - The Teletext Museum - Timeline".
  23. ^ "Publications" (PDF).
  24. ^ a b c "Archived copy". Archived from the original on 2008-09-01. Retrieved 2008-05-31.CS1 maint: Archived copy as title (link) - ATSC Closed Captioning FAQ (cached copy)
  25. ^ "ETSI EN 300 743: Digital Video Broadcasting (DVB); Subtitling systems"
  26. ^ Jim Taylor. "DVD FAQ". Archived from the original on 2009-08-22.
  27. ^ Jim Taylor. "DVD FAQ". Archived from the original on 2009-08-22.
  28. ^ MKPE Consulting LLC. "Enabling the Disabled in Digital Cinema".
  29. ^ "Redskins Ordered To Continue Captions". Washington Post. October 3, 2008. Retrieved 20 July 2015.
  30. ^ "Fourth Circuit Holds ADA Requires Expanded Access to Aural Content in Stadiums". April 4, 2011.
  31. ^ "Lifeline for hearing-impaired at ballparks". Retrieved 20 July 2015.
  32. ^ "Cards provide captioning for deaf at stadium". The Arizona Republic. Retrieved 20 July 2015.
  33. ^ "Letters". Next Generation. No. 30. Imagine Media. June 1997. p. 133.
  34. ^ Robson, Gary (1998). "Captioning Computer Games".
  35. ^ "Captions".
  36. ^ "Official YouTube Blog: The Future Will Be Captioned: Improving Accessibility on YouTube". Official YouTube Blog.
  37. ^ Nam, Tammy H. "The Sorry State of Closed Captioning". The Atlantic, June 24, 2014. Retrieved December 23, 2015.
  38. ^ "Official YouTube Blog: Professional caption services get "YouTube Ready"". Official YouTube Blog.
  39. ^ "Microsoft Media Platform: Player Framework". CodePlex.
  40. ^ Archived 2007-08-14 at the Wayback Machine
  41. ^ Sony Creative Software (April 2010): the Vegas Pro 9.0d update.
  42. ^ Apple - Final Cut Studio - What's New Archived 2011-06-08 at the Wayback Machine
  43. ^ CPC Closed Captioning & Subtitling Software for Matrox MXO2 Archived 2010-04-16 at the Wayback Machine
  44. ^ CPC Closed Captioning & Subtitling Software for Non-linear Editors (NLEs) Archived 2010-03-16 at the Wayback Machine
  45. ^ National Captioning Institute Logos Archived 2008-02-15 at the Wayback Machine


External links

This page was last edited on 4 April 2019, at 23:20
The basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses.