The Difference Between Closed Captions and Open Captions

Henni Paulsen
Posted in Subtitles
2 min read

There is nothing quite like enjoying a good film, series, or documentary and really getting immersed in the story. But what happens when that great piece of art is not in a language you understand? And what happens when you can see the video but physically can’t hear the audio? That’s where subtitles and closed captions (CC) come in. Both are forms of timed on-screen text that support a full viewing experience, one through language access and the other through language accessibility. In this article we explain how.

First, let’s delve into the not-so-subtle differences between language access and language accessibility. Language access, as the term implies, offers an opportunity for people to hear, read, and understand content in multiple languages. Language access involves translation and spoken language interpreting, and in the entertainment, business, and educational sectors, to name a few, language access is provided via dubbing and subtitles.

Language access centers on overcoming linguistic barriers, enabling communication, and understanding different languages (for example, with subtitles), whereas language accessibility focuses on overcoming sensory barriers, making information and experiences accessible to individuals with disabilities (for example, with closed captions).

Language accessibility may or may not involve language conversion. Closed captions (text that reflects the audio track of a video in the same language spoken on screen) or sign language interpreting are forms of language accessibility that ensure viewers with hearing impairments can fully experience a story told on screen and access auditory information.

Language accessibility also includes audio description for visually impaired viewers: an additional audio track that describes what happens on screen between stretches of dialogue.

What Goes on Behind the 'Screens'

Large theatrical production companies and global streaming services understand very well the need to make films and series accessible to people who speak other languages and to those with hearing impairments. In the case of subtitles, when companies decide which languages and dialects to include, they weigh a number of factors.

A primary function of subtitles is translating spoken dialogue for viewers who don’t understand the original language at all, or who understand it only partially. Viewers are thus given the choice to turn on subtitles in their preferred language. That choice certainly helps increase the reach of a particular film or series, yet it is seldom a consideration when companies decide in which languages subtitles will be available.

The larger the theatrical release (superhero franchises, for example), the more subtitle languages will be available. These bits of translated and adapted text, called “interlingual subtitles,” offer great advantages for production companies that want to reach diverse audiences, and there is always a chance that the streaming version of a theatrical release will be available with subtitles in more languages than the theatrical version.

Another advantage of subtitles is lower cost compared to dubbing. Professional dubbing for large productions is an expensive process. For well-funded content it makes sense to offer both dubbing and subtitles in major languages (those spoken by large segments of the population, such as English, French, Spanish, Chinese, and Arabic), but adding subtitles in many more languages ensures wider distribution for any video production.

Subtitles have enormous long-term value as well, a significance that goes beyond enhancing comprehension for entertainment viewers. Subtitles have great educational value for people learning a new language and for those accessing educational materials in a language in which they are not fluent. Subtitling is also valuable for business, giving diverse audiences a chance to understand your message better.

Although not as common as subtitles for entertainment purposes, subtitles for content creator videos on platforms like YouTube and Vimeo are now more affordable and easier to add than ever thanks to consumer-level applications and automation, including AI-enabled solutions.


How Subtitles are Created and Added

To someone who doesn’t know how subtitles are created, clicking a button on a streaming platform to turn them on makes things look quite simple. But a careful process takes place long before those subtitles can be switched on, enlarged, or viewed in different colors.

Transcription

The first step in the process is transcribing the audio content of the video, along with time stamps and speaker identification. In most cases the transcript is generated automatically by speech recognition technology; in certain cases (when speech recognition is not possible due to sound quality, or when a language is not supported by the technology), a human does it. The best transcripts are created when automatic tools and human expertise combine, with automatic transcriptions undergoing full reviews to ensure accuracy. Transcription precision is paramount, as a subtitle translation that truly matches the original content depends directly on the quality and completeness of the transcript.
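As an illustration, a transcript at this stage is essentially a list of timestamped, speaker-labeled segments. This minimal Python sketch is invented for illustration (the `TranscriptSegment` structure and the sample data are not taken from any particular tool), but it shows the kind of record the transcription step produces:

```python
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    """One unit of transcribed speech with timing and speaker labels."""
    start: float   # seconds from the start of the video
    end: float
    speaker: str   # speaker identification label
    text: str

# Hand-written example of what an automatic transcript might contain:
segments = [
    TranscriptSegment(0.0, 2.4, "SPEAKER_1", "Welcome back to the show."),
    TranscriptSegment(2.6, 5.1, "SPEAKER_2", "Thanks, it's great to be here."),
]

for seg in segments:
    print(f"[{seg.start:05.1f}-{seg.end:05.1f}] {seg.speaker}: {seg.text}")
```

Everything downstream (readability checks, translation, timing) works from records like these, which is why their accuracy matters so much.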

Adaptation for readability

If a viewer does not have enough time to read the text on the screen, the subtitles fail in their purpose of providing language access. For that reason, human reviewers apply parameters such as characters per line (CPL) and characters per second (CPS) to ensure readability. This happens before translation, because adjusting the source text for readability makes it much easier to adapt translations into languages whose text expands.
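Those readability parameters are easy to check mechanically. Here is a small, illustrative Python check; the default limits of 42 CPL and 17 CPS are common ballpark figures, but real style guides vary by platform and language:

```python
def readability_issues(text, start, end, max_cpl=42, max_cps=17):
    """Flag a subtitle that exceeds CPL/CPS limits.

    The defaults are illustrative; platforms publish their own limits.
    """
    issues = []
    for line in text.split("\n"):
        if len(line) > max_cpl:
            issues.append(f"line exceeds {max_cpl} characters: {len(line)}")
    duration = end - start
    # Reading speed here counts characters (including spaces) per second
    cps = len(text.replace("\n", " ")) / duration if duration > 0 else float("inf")
    if cps > max_cps:
        issues.append(f"reading speed {cps:.1f} CPS exceeds {max_cps}")
    return issues

# A two-line subtitle shown for only 1.5 seconds is too fast to read:
print(readability_issues(
    "This is a fairly long subtitle\nthat stays on screen briefly", 10.0, 11.5))
```

A reviewer (or tool) would then either extend the display time or shorten the text until no issues remain.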

Translation

Translation, the process of written language conversion, is complex work that requires linguistic expertise and cultural understanding. Translators must comprehend the full context of the dialogue or narration in the transcript to accurately convey meaning. This is not a word-for-word conversion of the dialogue, but a meaning-by-meaning translation that ensures the subtitles are linguistically and culturally appropriate for the target audience. Translations can be produced in a number of ways, such as by an expert human translator, neural machine translation, or AI-enabled translation, but a human should always review them to ensure they are accurate and fit the context.

Timing and target language adaptation

Accuracy and fluency in translation are not enough to make great subtitles. Once the translations are completed, they will likely need adjustment to preserve timing. Proper timing involves dividing the translated dialogue into segments that fit on the screen and are displayed for an adequate amount of time, and synchronizing those segments with the audio cues. At this stage, any adaptations needed for readability or to meet target-audience-specific requirements (such as censored speech) are made by expert human subtitlers, aided by technology that shows precise timestamps, CPL, and CPS.
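The segmentation part of this step can be sketched in code. The following Python function is a simplified illustration (professional tools also weigh linguistic breaks, shot changes, and audio cues): it wraps translated dialogue into display lines within a CPL limit and groups them into two-line events:

```python
def wrap_subtitle(text, max_cpl=42, max_lines=2):
    """Break translated dialogue into display lines within a CPL limit,
    then group the lines into subtitle events of at most max_lines each.

    Illustrative only: real segmentation also respects phrase boundaries.
    """
    words, lines, current = text.split(), [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_cpl:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    # Group consecutive lines into on-screen events
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

for event in wrap_subtitle(
        "The quick brown fox jumps over the lazy dog near the river bank",
        max_cpl=20):
    print(event, "\n---")
```

Each returned event would then be assigned start and end times synchronized with the audio.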

Editing and quality control

Subtitles usually undergo a meticulous editing process to ensure accuracy, clarity, and readability. Editors check for typos, grammatical errors, locale mismatches (when a term does not match the dialect of the target audience), and any inconsistencies in timing or formatting. Editors also verify once again that the subtitles are culturally relevant and appropriate for the target audience, and that any additional adaptations, whether for different audiences or required under contractual obligations, are correctly done.

Integration with video

Once multilingual subtitles are finalized, they are integrated with the video file. This can be done in different ways, depending on the desired or required format and delivery platform. Common formats include .srt and .vtt files. This is usually the work of engineers, many of them fluent in the language of the subtitles. That fluency, when available, lends an extra layer of quality to the process, since the engineer can check whether anything is amiss before the subtitles are tested in their final version as on-screen text.
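To make the formats concrete, here is a hedged Python sketch that serializes timed events into the .srt layout; .vtt differs mainly in its "WEBVTT" header and in using a period instead of a comma before the milliseconds:

```python
def to_srt_timestamp(seconds):
    """Format seconds as the HH:MM:SS,mmm timestamp .srt files use."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(events):
    """Serialize (start, end, text) tuples as an .srt document:
    a numbered block with a timing line, then the subtitle text."""
    blocks = []
    for i, (start, end, text) in enumerate(events, start=1):
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(to_srt([(0.0, 2.4, "Welcome back to the show."),
              (2.6, 5.1, "Thanks, it's great to be here.")]))
```

A file like this can be delivered as a sidecar track, which is what lets viewers toggle subtitles on and off.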

Testing and validation

Subtitled videos should be thoroughly tested on different devices and platforms to ensure they display correctly and are accurately synced with the audio. Ideally, budget and time permitting, feedback from native speakers of the target languages is collected for validation, to further assess the quality and cultural relevance of the translations, since perceptions often differ.
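Some of this validation can be automated before human testing begins. A minimal sketch, assuming subtitle events are simple (start, end, text) tuples:

```python
def validate_events(events):
    """Basic sanity checks before a subtitle file ships: timestamps must
    be well-formed, ordered, and non-overlapping, and no event is empty."""
    problems = []
    for i, (start, end, text) in enumerate(events):
        if end <= start:
            problems.append(f"event {i}: non-positive duration")
        if not text.strip():
            problems.append(f"event {i}: empty text")
        if i > 0 and start < events[i - 1][1]:
            problems.append(f"event {i}: overlaps previous event")
    return problems

# The second event starts before the first one ends:
print(validate_events([(0.0, 2.4, "Hello."), (2.0, 4.0, "Overlapping line.")]))
```

Checks like these catch mechanical errors cheaply, leaving human reviewers free to judge translation quality and cultural fit.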

Subtitling can be accomplished in a number of ways. A manual method involves a human actually typing the subtitles onto a video using an application. This is a way to add on-screen text that may or may not reflect dialogue. Semi-automated ways to add subtitles can start with an automatic transcript of the video dialogues, followed by the work of a human correcting the automatic transcript and making sure the subtitles fit for timing and readability. There are also fully automated subtitles, such as those created using professional platforms, in which subtitles are machine-translated and placed on videos, after which a human can review and make any necessary corrections.

Both subtitles and closed captions are a great way to reach diverse audiences, but subtitles are a definite advantage for content creators whose success hinges on the number of views and subscribers to a channel.

For marketing localization, although the best campaigns are fully localized, including tailor-made copy and imagery, subtitles are a great way to increase a brand’s reach. In fact, subtitles on marketing media should be considered indispensable for any global brand. Contact with online marketing materials may happen in any place where there is open Internet access, and who knows where the next international opportunity may be!

Closed Captions, Accessibility, and Regulations

Unlike subtitles, which provide a written version of dialogue and narration, closed captions provide an intralingual text rendering of all audio elements: dialogue, sound effects, background music, and contextual noises. They serve hearing-impaired viewers as well as those who need to read captions for other reasons, such as improving comprehension in noisy environments (transportation hubs, airports, sports bars) or silently watching videos on handheld devices in quiet environments when no headphones are available.

Closed captions also include additional information, such as speaker identification and data that might be relevant for the hearing-impaired, such as whether captions have been automatically generated. Providing this information is all about inclusivity, that is, making sure that people with hearing disabilities have access to exactly the same information as everyone else.

In many countries, the availability of closed captions is mandated by inclusivity and disability laws. Regardless of regulations, providing closed captions is simply good business, because it significantly expands viewer reach. Deciding whether to add captions is also more straightforward than deciding on subtitle languages, precisely because regulations drive their inclusion, whether in a live or a recorded broadcast.

Types of Closed Captions

There are different types of closed captions, and they serve different purposes. These are the most frequently used captions:

Pop-on captions

These captions appear one or two lines at a time on the screen, typically at the bottom (although sometimes the viewer can choose the CC position), and disappear as the audio progresses. They are mostly used in pre-recorded media like feature or short films, TV shows, and online videos.

Roll-up captions

These captions roll onto the screen one line at a time, and the top line disappears as a new line appears at the bottom. These captions are usually generated automatically during live broadcasts (news, sports, or events where real-time captioning is required). Roll-up captions tend to have a delay, providing more time for viewers to read each line of text, but this also means that by the time they read a caption, it matches a previously viewed frame.
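The roll-up behavior is easy to picture as a small sliding window over incoming lines. A toy Python simulation (the function name and sample lines are illustrative only):

```python
from collections import deque

def roll_up(lines, visible=2):
    """Simulate a roll-up caption display: each new line enters at the
    bottom, and the oldest visible line scrolls off the top."""
    window = deque(maxlen=visible)   # fixed-size window drops the oldest line
    frames = []
    for line in lines:
        window.append(line)
        frames.append(list(window))  # snapshot of what the viewer sees
    return frames

for frame in roll_up(["GOOD EVENING,", "AND WELCOME", "TO THE NEWS."]):
    print(frame)
```

Each printed frame is one state of the on-screen caption area as the broadcast progresses.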

Paint-on captions

These captions are embedded directly into the video frame, and usually appear as white text on a black background. They are the least used type of captions, usually applied to older videos or those that don’t support other formats (like pop-on captions).

Closed captions with sound effects

These are the captions that also include descriptions of non-speech audio elements like “[thunderclap in the distance]” or “[footfalls].” They are considered part of assistive technology and are becoming increasingly popular in all types of media, as they provide a more complete experience for deaf and hard-of-hearing viewers. These captions are also known as “subtitles for the deaf and hard of hearing” (SDH), and the term is one of the reasons why some people think subtitles and captions are the same.

Verbatim captions

These captions, also known as “true verbatim” or “strict verbatim,” are a type of caption that aims to capture spoken words exactly as they are uttered. This means including all filler words (like “um,” “uh,” “like”), false starts, stutters, repetitions, mispronunciations, and even non-speech sounds (like laughter or coughs). These captions go a step further than SDH because they include all aspects of spoken language, such as pauses and changes in tone. Verbatim captions are primarily used in settings where preserving the exact wording and nuances of speech is crucial, such as during legal proceedings, research interviews, or when documenting historical events. Naturally, verbatim captions take a lot longer to produce than all other types.

Did You Know There Are Open Captions, Too?

Open captions, in contrast to closed captions, are a type of captioning where the text is permanently embedded within the video itself. The captions are always visible on the screen and cannot be turned off or hidden by the viewer. Open captions appear, for example, in many video productions when people speak English with an accent. In those cases, when viewers turn on closed captions, the closed captions appear on top of the open captions, and the two often do not match.

Open captions are “burned into” the video file, making them an inseparable part of the visual presentation. They typically appear as white text on a black background at the bottom of the screen, and because they are rendered as graphics, some open captions are stylized to match the aesthetic of the video.
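Burning captions in is typically done with a video tool such as ffmpeg, whose subtitles filter renders a caption file onto every frame. This Python sketch only builds the command line (the file names are placeholders), which makes the idea concrete without actually running ffmpeg:

```python
def burn_in_command(video, subtitles, output):
    """Build an ffmpeg command that burns a caption file into the video.

    ffmpeg's 'subtitles' video filter draws the captions onto every frame,
    producing open captions that can never be turned off. The audio stream
    is copied through unchanged.
    """
    return [
        "ffmpeg", "-i", video,
        "-vf", f"subtitles={subtitles}",  # hard-codes the text into the picture
        "-c:a", "copy",                   # audio passes through untouched
        output,
    ]

print(" ".join(burn_in_command("film.mp4", "captions.srt",
                               "film_open_captions.mp4")))
```

Once the output file is rendered, the text is pixels like any other part of the image, which is exactly why viewers cannot hide it.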


These types of embedded captions are also used in settings without captioning control, meaning permanent displays in public spaces like airports, gyms, museums, bus stop displays, etc., where viewers do not have the ability to control caption settings.

Some people argue that open captions are all about universal accessibility because they ensure that everyone, regardless of setting or location, device, video format, and other technology factors can understand the audio content.

On the other hand, open captions can be distracting and impact the viewing experience since they cannot be turned off. Viewers cannot customize the appearance or placement of open captions, so if they want to display subtitles, for instance, they would be looking at two sets of text on screen.

Accessibility, Inclusivity, and Assistive Technology

Overall, accessibility and inclusivity laws and regulations demonstrate a global recognition of the importance and benefits of closed captioning. By mandating captions and establishing quality standards, these regulations aim to create a more equitable media landscape for everyone, regardless of hearing ability.

There are many laws and regulations around the world regarding accessibility for the deaf and hearing-impaired community, but they all share the following:

Accessibility goals

All these laws and regulations were created to ensure equal access to information and services for individuals who are deaf or hard of hearing. The letter and the spirit of these pieces of legislation recognize that closed captions are a crucial tool for enabling these individuals to fully participate in society and enjoy media content.

Mandated captioning

All the main laws require closed captioning for specific types of content, most commonly television programming and online videos, and labor equality laws may also require captions for company online meetings. The requirements vary, but the core principle of providing captions is consistent across regions.

Quality standards

Many of these laws and regulations also include provisions for caption quality, ensuring that captions are accurate, synchronized with the audio, and complete.

Enforcement mechanisms

Most accessibility laws and regulations have enforcement mechanisms in place, such as fines or penalties for non-compliance. This helps ensure that content creators and distributors adhere to the captioning requirements and prioritize accessibility.

Ongoing evolution

Accessibility laws and regulations are not static. They are constantly evolving to keep up with technological advancements and changing societal needs. This means that content creators and media distributors, for example, must stay informed about the latest requirements to ensure continued compliance and provide the best possible experience for viewers with disabilities.

Some examples of laws and regulations that include accommodations for people with disabilities: the Americans with Disabilities Act (ADA) in the US requires that broadcasters, movie theaters, and online platforms provide effective communication for individuals with disabilities; the 21st Century Communications and Video Accessibility Act (CVAA), also in the US, specifically mandates closed captions for television programs shown online if they previously aired with captions on TV; and Federal Communications Commission (FCC) regulations enforce rules requiring closed captions for television programming.

In Canada, the Canadian Radio-television and Telecommunications Commission (CRTC) regulations mandate closed captioning for most television programming in both English and French. There are also specific rules for audio description (for the visually impaired) and captioning quality. In the United Kingdom, the Equality Act 2010 prohibits discrimination against individuals with disabilities, including those who are deaf or hard of hearing; in practice, this means service providers must make reasonable adjustments, such as providing closed captions. In Europe, the European Accessibility Act (EAA) was enacted to harmonize accessibility requirements for a wide range of products and services, including audiovisual media. It requires video content on platforms like video-sharing websites and streaming services to have closed captions.

Those are just some examples of regulations around the world regarding content accessibility and closed captions.

Who Uses Subtitles and Closed Captions?

By now you understand the differences and similarities between subtitles and closed captions, and in this section, we are going to examine which industries and professionals create subtitles and closed captions, and for which business purposes.

Subtitles

Film, television, and streaming

These verticals are the first to come to mind when people think about subtitles and closed captions, but as mentioned before, content creators also use them on video-sharing platforms. Among the people who make decisions about subtitles and closed captions in these industries are post-production supervisors, content localization specialists, content operations managers, and accessibility coordinators.

Gaming industry

Games can be made accessible for deaf and hard-of-hearing players with closed captions. The people usually responsible for adding them are game localization managers and game accessibility specialists.

Education and academia

Standalone online learning platforms, as well as those used by university systems, use subtitles and closed captions to improve comprehension for non-native speakers of the language of instruction, aid learning for students with different learning styles, and make educational content accessible for students with disabilities. In both settings, the people in charge of adding subtitles and closed captions include instructional designers, eLearning content developers, media services specialists, and disability services coordinators.

Corporate marketing and advertising

Whether as part of a company division or as independent agencies, marketing and advertising operations are some of the largest users of subtitles and closed captions. Some of the positions charged with ensuring they are properly used are video content producers, social media managers, internal communications managers, corporate communications specialists, human resources managers, and other marketing and communications professionals.

News media

Closed captions make news content accessible and inclusive, and they are less expensive than live sign language interpreters. In 24/7 news outlets there is usually a person dedicated to captions; other broadcasting organizations rely on people like audio/video (A/V) producers and digital content managers to ensure captions are available.

Governments

Some governments broadcast plenary sessions and public service announcements with subtitles and closed captions to ensure public information is accessible to all, including those with disabilities. People with titles like public affairs officer, communications director or accessibility coordinator are in charge of making government content accessible.

Non-governmental organizations

These institutions use social media and video platforms to raise awareness about the issues they advocate for. Some of these organizations are global and others operate only at a local level. Global organizations, such as the World Wildlife Fund and Doctors Without Borders, use both multilingual subtitles and closed captions, while local ones use closed captions. Communications directors and content managers are usually in charge of adding subtitles and closed captions at non-governmental organizations.

The list above is not exhaustive, but it does illustrate the level of diversity among users of subtitles and captions, as well as the global nature of both of these resources. In fact, they are considered essential for video content distribution in all these sectors.

How to get captions for videos?

If you are looking to create captions for videos quickly and cost-effectively, look no further than Happy Scribe’s subtitling platform. Happy Scribe is a cloud-based web application that automatically generates captions for videos. Moreover, Happy Scribe’s AI Subtitle Generator can create captions for your video in more than 119 languages and accents. Try it here.
