Amazon Prime Video plans to use AI to dub foreign language shows and movies into English and Latin American Spanish. The company has begun a pilot program that uses AI-aided dubbing on 12 licensed movies and series, including titles such as El Cid: La Leyenda, Mi Mamá Lora, and Long Lost.

Amazon says the pilot uses a hybrid approach to dubbing in which localization professionals collaborate with AI to ensure quality control, and made it clear that it will only use its AI-aided process on content that doesn't already have dubbing support.

I reached out to Amazon to find out whether the creators of the 12 pilot movies and series were involved in the process, but I hadn't received a response by the time this article was published.

Many cinephiles believe that watching a dubbed version of a foreign language film or series undermines the art. Since an actor's performance is a combination of movement, speech, and emphasis, it's important to experience all of it, even if you need subtitles to understand what is said. If AI dubbing could preserve 100% of that performance while converting it to a different language, it could redefine what it means to watch a dubbed movie.

On the other hand, AI dubbing threatens the livelihood of professional voice actors. In 2023, voice actors sounded the alarm via the National Association of Voice Actors (NAVA). It issued advice for voice actors, telling them never to grant synthesis rights to a client and to contact their union or an attorney if they suspect a contract is trying to take those rights.

Among their concerns was that studios might use AI to edit lines of dialogue, in effect getting new performances from actors without bringing them back into the recording studio (or paying them to do so).

Amazon isn't the first company to employ AI-based dubbing. In 2023, Spotify debuted a tool based on OpenAI's technology that let it clone the voices of its podcast hosts and dub them into other languages.

That technology has continued to improve at a dramatic rate. In 2024, OpenAI boasted that it needed only 15 seconds of sample audio to create an AI clone of someone's voice. Just a few months later, Microsoft, which has invested heavily in OpenAI, revealed that its own state-of-the-art AI voice model, VALL-E 2, was too dangerous to release because of its realism, which sparked fears of misuse.