Thursday, October 20, 2022

Meta's New AI-Powered Speech Translation System Pioneers a New Approach For Unwritten Languages - Analytics India Magazine


Until today, AI translation has primarily focused on written languages. Yet around half of the world's 7,000+ living languages are mainly oral, without a standard or widely used writing system. This makes it impossible to build machine translation tools for these languages using standard techniques, which require large amounts of written text to train an AI model.

To address this challenge, Meta has built the first AI-powered translation system for a primarily oral language: Hokkien, which is widely spoken within the Chinese diaspora. Meta's technology allows Hokkien speakers to hold conversations with English speakers even though the language lacks a standard written form.
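The article does not describe the programming interface of the released models, but conceptually the system translates Hokkien speech into English speech without relying on Hokkien text. Below is a minimal sketch of what such a call could look like; the model object and its translate method are hypothetical placeholders for illustration, not Meta's actual fairseq API.

```python
# Minimal sketch of a direct Hokkien -> English speech-to-speech call.
# The `model` object and its `translate` method are hypothetical placeholders;
# Meta's released models are distributed through fairseq, whose exact API
# is not described in this article.
import torch
import torchaudio


def translate_utterance(model, wav_path: str) -> torch.Tensor:
    """Return English speech (a waveform tensor) for one Hokkien utterance."""
    waveform, sample_rate = torchaudio.load(wav_path)  # shape: (channels, samples)
    # Speech goes in, speech comes out: no intermediate Hokkien transcript is
    # produced, because the language has no standard written form.
    with torch.no_grad():
        english_waveform = model.translate(waveform, sample_rate)  # hypothetical method
    return english_waveform
```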


The open-sourced AI translation system is part of Meta's Universal Speech Translator (UST) project, which is developing new AI methods that would eventually allow real-time speech-to-speech translation across all extant languages, including primarily spoken ones. The company believes that spoken communication can help break down barriers and bring people together wherever they are, even in the metaverse.


To develop the new system, Meta's AI researchers had to overcome many of the challenges that face traditional machine translation systems, including model design, data gathering, and evaluation. The blog reads, “We have much work ahead to extend UST to more languages. But the ability to speak effortlessly to people in any language is a long-sought dream, and we’re pleased to be one step closer to achieving it. We’re open-sourcing not just our Hokkien translation models but also the evaluation datasets and research papers, so that others can reproduce and build on our work.”
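Evaluation is one of those challenges: with no standard Hokkien orthography, output quality is typically judged on the speech itself. One common approach for speech-to-speech systems (not detailed in this article) is ASR-BLEU, where the translated English audio is transcribed by a speech recognizer and the transcripts are scored against reference text. A minimal sketch of the scoring step using the open-source sacrebleu library, with illustrative transcripts standing in for real ASR output:

```python
# Sketch of the ASR-BLEU scoring step. An ASR system is assumed to have
# already transcribed the translated English audio into `hypotheses`;
# the strings below are illustrative, not real system output.
import sacrebleu

hypotheses = [
    "i am going to the market tomorrow morning",
    "the weather is very nice today",
]
references = [
    "i'm going to the market tomorrow morning",
    "the weather is really nice today",
]

# corpus_bleu takes a list of hypothesis strings and a list of reference
# streams (one inner list aligned with the hypotheses).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"ASR-BLEU: {bleu.score:.2f}")
```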
Moreover, the techniques can be extended to many other written and unwritten languages. Meta is also releasing SpeechMatrix, a large corpus of speech-to-speech translations mined with its data mining technique LASER, so that researchers can create their own speech-to-speech translation (S2ST) systems and build on Meta's work.
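LASER maps utterances from different languages into a shared embedding space, so candidate translation pairs can be mined by nearest-neighbour search over those embeddings. The sketch below illustrates the idea with plain cosine similarity and a mutual-nearest-neighbour check; the real SpeechMatrix pipeline uses trained speech encoders and a margin-based score, neither of which is shown here.

```python
# Simplified embedding-space mining in the spirit of LASER. Encoders (not
# shown) are assumed to have mapped Hokkien and English utterances into the
# same vector space; pairs that are mutual nearest neighbours with a high
# cosine similarity are kept as candidate translations.
import numpy as np


def mine_pairs(src_emb: np.ndarray, tgt_emb: np.ndarray, threshold: float = 0.8):
    """Return (src_index, tgt_index, score) for mutual nearest neighbours above threshold."""
    # Normalize so the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                  # (num_src, num_tgt) cosine similarities
    best_tgt = sims.argmax(axis=1)      # best target for each source utterance
    best_src = sims.argmax(axis=0)      # best source for each target utterance
    pairs = []
    for i, j in enumerate(best_tgt):
        # Keep only mutual nearest neighbours that clear the score threshold.
        if best_src[j] == i and sims[i, j] >= threshold:
            pairs.append((i, int(j), float(sims[i, j])))
    return pairs


# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
print(mine_pairs(rng.normal(size=(5, 16)), rng.normal(size=(7, 16)), threshold=0.0))
```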

