The TLC dating reality series “Love & Translation” premieres this Sunday, Jan. 21 at 10 p.m. ET/9 p.m. CT on the network.
Those without cable can catch the premiere of “Love & Translation” for free on Philo, DirecTV Stream or FuboTV, each of which offers a free trial to new users.
You can watch a trailer for the new series on TLC’s YouTube channel.
“Watch as three American bachelors travel to paradise where they will be joined by twelve women from nine different countries, who don’t speak any English,” TLC said in a description of the series.
“Without a shared language or the use of a translator, ‘Love & Translation’ explores how singles looking for love come together in the attempt to find a connection,” TLC added.
The premiere episode is titled “You Had Me at Bonjour.” In its description, TLC said: “Three American men who only speak English look for love in a seaside villa with twelve international women who do not speak English, testing a social experiment on whether love can break through the language barrier.”
How can I watch TLC’s “Love & Translation” for free if I don’t have cable?
Viewers can stream the premiere through Philo, DirecTV Stream or FuboTV, which all offer a free trial for new users.
What is Philo?
Philo is an over-the-top internet live TV streaming service that offers 60+ entertainment and lifestyle channels, like AMC, BET, MTV, Comedy Central and more, for the budget-friendly price of $25/month.
What is DirecTV Stream?
DirecTV Stream offers a wide range of live and on-demand content, starting with more than 75 live TV channels.
What is FuboTV?
FuboTV is an over-the-top internet live TV streaming service that offers more than 100 channels spanning sports, news, entertainment and local programming.
Machine translation, a crucial aspect of Natural Language Processing, has advanced significantly. Yet a primary challenge persists: producing translations that go beyond mere adequacy and approach near perfection. Traditional methods, while effective, are often limited by their reliance on large datasets and supervised fine-tuning (SFT), which constrains the quality of the output.
Recent developments in the field have brought attention to moderate-sized large language models (LLMs), such as the ALMA models, which have shown promise in machine translation. However, the efficacy of these models is often constrained by the quality of reference data used in training. Researchers have recognized this issue and explored novel training methodologies to enhance translation performance.
Enter Contrastive Preference Optimization (CPO), a new approach to refining machine translation training. Rather than simply aligning model outputs with gold-standard references, as traditional supervised fine-tuning does, CPO trains models to distinguish between merely ‘adequate’ and ‘near-perfect’ translations, pushing the boundaries of translation quality.
The mechanics of CPO are intriguing. It employs a contrastive learning strategy that utilizes hard negative examples, a significant shift from the usual practice of minimizing cross-entropy loss. This approach allows the model to develop a preference for generating superior translations while learning to reject high-quality but not flawless ones.
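The description above translates naturally into a preference-style training objective. Below is a minimal, illustrative sketch in PyTorch of what such a CPO-style loss could look like, assuming the model is trained to prefer a “near-perfect” translation over a merely “adequate” one of the same source sentence; the function name, the `beta` scaling factor, and the way the two terms are combined are assumptions for illustration, not the authors’ released implementation.

```python
import torch
import torch.nn.functional as F

def cpo_style_loss(logp_preferred: torch.Tensor,
                   logp_rejected: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """Hypothetical CPO-style objective.

    logp_preferred / logp_rejected are the sequence log-probabilities the model
    assigns to the near-perfect and the merely adequate translation of the same
    source sentence (one value per example in the batch).
    """
    # Contrastive preference term: push the preferred translation's likelihood
    # above that of the rejected (hard negative) translation.
    prefer_term = -F.logsigmoid(beta * (logp_preferred - logp_rejected))
    # Likelihood term on the preferred translation, keeping the model anchored
    # to fluent outputs rather than only widening the gap between the two.
    nll_term = -logp_preferred
    return (prefer_term + nll_term).mean()

# Toy usage with made-up log-probabilities for a batch of two sentence pairs.
logp_good = torch.tensor([-10.0, -12.0])
logp_adequate = torch.tensor([-14.0, -13.0])
print(cpo_style_loss(logp_good, logp_adequate))
```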
The results of implementing CPO have been nothing short of remarkable. The method has demonstrated a substantial leap in translation quality when applied to ALMA models. The enhanced model, referred to as ALMA-R, has showcased performance that matches or surpasses that of the leading models in the field, such as GPT-4. This improvement was achieved with minimal resource investment – a notable achievement in machine translation.
A detailed examination of the ALMA-R model’s performance reveals its superiority over existing methods. It excels in various test datasets, including those from the WMT competitions, setting new translation accuracy and quality standards. These results highlight the potential of CPO as a transformative tool in machine translation, offering a new direction away from traditional training methodologies that rely heavily on extensive datasets.
In conclusion, the introduction of Contrastive Preference Optimization marks a significant advancement in the field of neural machine translation. By focusing on the quality of translations rather than the quantity of training data, this novel methodology paves the way for more efficient and accurate language models. It challenges existing assumptions about machine translation, setting a new benchmark in the field and opening up possibilities for future research and development.
Muhammad Athar Ganaie, a consulting intern at MarktechPost, is a proponent of Efficient Deep Learning, with a focus on Sparse Training. Pursuing an M.Sc. in Electrical Engineering, specializing in Software Engineering, he blends advanced technical knowledge with practical applications. His current endeavor is his thesis on "Improving Efficiency in Deep Reinforcement Learning," showcasing his commitment to enhancing AI's capabilities. Athar's work stands at the intersection of "Sparse Training in DNNs" and "Deep Reinforcement Learning".
Vasco Electronics previewed its next-generation live voice translation device that is powered by artificial intelligence (AI) and can provide live translations of nearly 50 languages to your ear.
The company, headquartered in Poland, previewed the device at CES 2024 in Las Vegas last week. The Vasco Translator E1 uses earpieces that are connected to a phone app and can translate 49 languages in real-time with an audio translation that someone can hear through the earpiece. The translation also appears as text in the app for convenience.
The Translator E1 can accommodate conversations with up to 10 people when a mobile app is used. The tool allows each user to speak their own language and hear the response in that language. The earbuds fit over the ear, rather than in the ear, for hygienic purposes, and the system can be used with two earpieces or one earpiece and a phone.
Tomasz Stomski, Vasco’s chief product officer, told FOX Business at CES that the E1 was designed to be more user-friendly for longer conversations than its Translator V4, which is more useful for a "fast conversation, like if you’re traveling or need to get something done quickly."
The Vasco E1 Translator earbud. (Courtesy of Vasco Electronics)
"We decided to go for something that is more natural so you don’t have to hold the device in your hand, you can put it on the table and put on your earbuds and set it to touchless translation mode and have a conversation," Stomski said.
Vasco currently sells the Translator V4, which is a handheld device that resembles a smartphone and provides live translation of conversations and can also be used to translate text from images taken by the user. The V4 can provide speech translation in 76 languages, in addition to photo translations in 108 languages and text translation in 90 languages – though it is most useful for translating shorter conversations.
Stomski said Vasco wants the translation tools to meet the needs of customers who travel regularly, as well as those who are expats, work in international teams or are part of families with language barriers because relatives come from different countries.
The Vasco E1 Translator with earbuds in the case. (Courtesy of Vasco Electronics / Fox News)
The Vasco Translator E1 is expected to be available in the U.S. in the second quarter of this year, while it will be available this March in Europe. It’s also compatible with the Translator V4 for users who wish to use both devices.
Stomski explained that AI is used to help identify human voices to filter out background noise that could otherwise interfere with the live translation or the transcript generated by the devices.
"All of the translations there are connections with AI of course, but starting with the earbud – here we’ve got a model that helps us to recognize the human voice because we’re checking what is going into the microphone and we’re checking with the AI model if this is a human voice or this is like a car or a dog," he said.
The Vasco V4 Translator. (Courtesy of Vasco Electronics / Fox News)
Stomski added that the device uses several models from different providers, in addition to in-house solutions to cross-check speech translations as well as the image-to-text translations that Vasco’s tools can provide.
"We are constantly checking the quality of the translation for every pair because we cannot do it in a common language," he explained. Those translations are sent to Vasco’s cloud, which helps semi-automatically test the roughly 6,000 pairs of languages covered by the devices before those results are checked by humans.
AI translation has transformed the way we communicate, breaking down language barriers in an unprecedented way. The sector’s global market size is projected to reach $12.3bn (€11.3bn) by 2026 — and big and small players alike are aiming to cash in.
Among them, Cologne-based DeepL has been raising industry standards even compared to tech giants like Google Translate and Microsoft Translator.
The startup was born from online dictionary Linguee and has grown fast since its founding in 2017 by Jarek Kutylowski, a computer scientist who’s also serving as the company’s CEO.
Born in Poland, Kutylowski moved to Germany at the age of 12 where he attended school without speaking a word of German. This made him realise the importance of language and the difficulty in having to communicate in more than one.
When he began working on DeepL in 2017, he saw that neural networks might offer a breakthrough that would enable technology to solve this problem. “We kind of knew that machine translation was going to go in this direction. And we knew this was going to be immensely helpful. Seeing this opportunity, we thought ‘Hey, let’s build something great,’” Kutylowski tells TNW.
Neural Machine Translation (NMT) — as in, the one using neural networks — is the most successful machine translation method we have to date. Compared to its predecessors, it’s faster, more accurate, less resource-intensive, and easier to scale.
DeepL uses the technology to offer free and premium translation services, with special focus on B2B products. It says that, since its inception, over 1 billion users have made requests, and that it currently has more than 20,000 business customers, including the likes of Elsevier, Fujitsu, and Mastodon.
“Translation is really important for businesses,” Kutylowski explains. “Nowadays, companies start going global and expanding into other markets very quickly, so they get customers in different areas.”
He adds that the largest need for translation lies usually in those professions that are text-heavy, such as legal services. “This is where we see most often the strongest demand from our customer base,” Kutylowski says.
To date, DeepL covers 31 languages spanning Europe and Asia. In 2023, it introduced its AI writing companion and secured unicorn status. Despite the tough funding environment, in January the company raised an undisclosed amount (estimated at €100mn), reaching a €1bn valuation.
The “world’s best” machine translation
DeepL makes the confident claim of offering the “world’s best” AI translation. In addition, it says its product is more nuanced and 3x more accurate than those of its competitors.
These assertions are based on “blind tests,” in which professional translators select the most accurate translation without knowing which company produced it.
When I ran a few tests of my own, DeepL did indeed come out on top. First, I took a passage from The Stranger by Albert Camus and translated it from French into English, using both DeepL’s translator and Google Translate.
Although these tools weren’t built for literary texts, I decided to begin with one anyway, as such material is by default more difficult for an AI system.
That’s because the art of literary translation is complex and requires more than just linguistic proficiency. It involves a high level of creativity, a deep understanding of the author’s voice, style, and socio-historical background, as well as the transfer of meaning across different cultures.
Nevertheless, DeepL’s result was by far superior to Google’s. While it missed some uses of metaphorical language and made a few errors of intent and agreement, the end text did provide a closer meaning to the original.
I repeated the experiment using an article of my own to examine whether the translation tools conveyed the meaning I myself had intended. I translated from English into Greek (my native language).
Again, DeepL did a better job. Despite a couple of minor mistakes, the translation was more nuanced and natural in Greek, while also sticking to the original meaning. But since that’s probably all Greek to you, you don’t have to take my word for it; test it for yourself.
According to Kutylowski, conveying the right meaning to the target language without “butchering” it requires the right balance between accuracy and fluency. This heavily depends on the use case. For example, a technical document calls for higher accuracy, while a marketing text for higher fluency.
Despite this challenge, he believes that AI is capable of learning even the most complex languages. “If there was an alien language that we had to learn, nowadays, with the proper amount of material translated, we could probably work out the translation model for that too,” he adds.
What’s DeepL’s edge?
Kutylowski doesn’t seem concerned about the competition. “We’ve always been rivalling big companies,” he notes, adding that Google Translate remains DeepL’s biggest competitor.
He says that the startup’s edge comes down to a combination of three factors: hard work, a great team, and focus.
“Focus is always an important thing,” he says. “Translate isn’t the core business of Google — it’s one of the 100 side gigs. The same goes if you consider LLMs and the OpenAIs of this world as our competition; translation is only one thing of what they’re doing and their GPU is doing a tonne of different things. We’re focused on one particular area.”
From a technological perspective, DeepL’s success lies in the architecture of its neural networks, the input from human editors, and the training data.
The startup trains its models on tons of data, mostly from the internet, and employs special web crawlers to automatically find translations and assess their quality. It also uses methods such as reinforcement learning to provide positive feedback to the AI so it keeps producing the desired quality.
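As a minimal sketch of that kind of data pipeline (purely illustrative; DeepL’s crawlers and quality models are not public), crawled sentence pairs might be scored and filtered before training, with a learned quality estimator standing in for the simple length-ratio heuristic used here.

```python
from typing import List, Tuple

def pair_quality(source: str, target: str) -> float:
    # Stand-in quality score: penalise empty segments and wildly different lengths.
    # A production system would use a trained quality-estimation model instead.
    if not source.strip() or not target.strip():
        return 0.0
    return min(len(source), len(target)) / max(len(source), len(target))

def filter_pairs(pairs: List[Tuple[str, str]],
                 threshold: float = 0.5) -> List[Tuple[str, str]]:
    # Keep only sentence pairs whose score clears the threshold.
    return [(s, t) for s, t in pairs if pair_quality(s, t) >= threshold]

crawled = [("Guten Morgen.", "Good morning."), ("Impressum", "")]
print(filter_pairs(crawled))  # keeps only the plausible translation pair
```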
It’s also about finding the right balance between the capability of the model to translate and its capability to form sentences in the target language, Kutylowski adds. “So a lot of work goes into how much we are training the models on monolingual data and how much on translations. There are a lot of details which the mathematical team is taking care of.”
Machine translation: new challenges and opportunities
Kutylowski acknowledges that the recent boom in fascination with AI — to a great extent because of Large Language Models (LLMs) — has resulted in a more challenging and fast-paced landscape.
DeepL’s team now has to keep up to date with multiple developments: new models coming out, the open-source work that’s happening, academic research, and the work of other companies.
“Machine translation is kind of a race right now,” he says. So what’s a good strategy for competing in this race?
According to Kutylowski, one aspect is to keep innovating and to ensure you’re taking the right steps to enable that. Investing appropriately is another. It also comes down to securing the required capital and the right team.
But at the same time, the exponentially increasing interest and advancement in AI also brings new technological opportunities. “There are things that we thought about two or three years ago that technology wasn’t there yet to enable,” he says.
This includes personalised translations that fit a company’s style and a more interactive translation experience. DeepL has also begun research into voice translation, while it’s training its own LLMs from scratch, thanks in part to its new supercomputer cluster, DeepL Mercury.
These LLMs will open up opportunities to further improve translation quality and enable new, interactive workflows for users, with more capabilities and applications to be unveiled in 2024.
The future of language learning
Machine translation has had a tremendous impact on overcoming communication barriers. But this also raises the question: will we reach a point where we’ll no longer be learning foreign languages because AI can do it for us?
“With AI advancement in general, I think we as humans will have to ask ourselves the question, what do we need to learn? And what do we want to learn?,” Kutylowski says.
He believes that, when it comes to surviving in a foreign country, the necessity of language learning will gradually decrease as the technology gets better and better. But this doesn’t mean that the value in learning languages will decrease.
To illustrate, he uses maths as an example. While in real life we don’t apply the majority of the complex equations we were taught at school, the process of learning is still vital because it teaches us rational thinking.
The same goes for languages, Kutylowski says. When we learn a language we are also learning how to form thoughts and articulate ideas — and that’s crucial for our development.
The benefits of foreign language learning and bilingualism are indeed far-reaching, for both the individual and society.
Research shows that learning a second language actually changes the brain. Specifically, it increases the density of grey matter (the number of neurons in the brain) as well as the integrity of white matter (the system of nerve fibres that connect the different regions of the brain). This not only strengthens overall brain function, but also enhances memory, attention, concentration, and other cognitive abilities.
In addition, numerous studies have linked language learning to better academic performance, higher employability, improved creativity, as well as communication skills and cross-cultural awareness.
“So for your own pleasure and for the development of your brain and personality, it’s still going to be important to learn languages,” Kutylowski notes. “And even with the best translator on your phone, if you’re marrying a partner who’s from a different country, you’re not going to be communicating through your phone. Or at least, I hope so.”