“Jeopardy!” host Ken Jennings got into a Twitter tiff with a viewer who claimed a seemingly correct answer from Friday’s episode was incorrect.
A fan of the trivia show tweeted to the champ, claiming that the answer for the $200 question under the “Potent Potable Rhyme Time” category did not actually rhyme.
The clue provided read: “Rice wine for the guy who rides a racehorse.”
One of the contestants, Kari Elsila, rang in immediately with, “What is ‘sake’ and ‘jockey’?”
The answer referred to the Japanese alcoholic beverage, whose pronunciation rhymes with “jockey,” according to Merriam-Webster, though the show reportedly uses the Oxford English Dictionary for reference.
That prompted the viewer’s attempt at correction.
“Dear @Jeopardy writers ‘Sake’ and ‘Jockey’ are not rhyming words,” wrote the fan before tagging Jennings, 48, in a separate tweet.
Jennings then clapped back at the viewer.
“I am once again asking Americans to buy a dictionary,” slammed Jennings in his reply, which included photos of both words phonetically spelled out.
Unfortunately, that was not the end of it.
“Love when English changes foreign words, I guess,” responded the Twitter user.
Jennings refused to back down as well.
“Yeah, I’m always mad when people say the ‘s’ in Paris. Shameful,” jeered Jennings.
“Wonder what English would sound like if all our borrowed words were pronounced correctly, actually,” chided the Twitter fan.
Meanwhile, a similar battle raged on the game show’s YouTube page.
“Everybody who doesn’t have an American accent will be immediately irritated by the first clue so transparently not rhyming in any accent without the caught-cot merger,” said one fan.
“Gah! ‘Sake’ does NOT rhyme with ‘jockey,'” one exasperated commenter said. “‘Sake’ is pronounced just as it’s spelled: sa-ke. Sah-keh, phonetically. The ‘e’ in Japanese is like the ‘e’ in the English word ‘let.’ If it rhymed with ‘jockey’ it would be ‘saki.'”
In fact, numerous online sources do indeed suggest that it is pronounced, “sah-keh.”
The Post has reached out to Jennings for comment.
This is not the first time the game show host has been scolded online.
Last month, Jennings was trolled when viewers claimed he “robbed” a contestant of his points after the competitor mispronounced an answer.
“After the Last Supper, Jesus traveled to this garden to pray & was arrested there,” read the $1,600 clue.
Contestant Kevin Manning rang in with the correct answer of the Garden of Gethsemane, which is pronounced, “Geth-SEH-muh-nee.”
However, Manning pronounced the hard “g” sound — like “gate,” which is correct — in the beginning and a “d” sound — rather than an “n” — on the last syllable.
Jennings ruled the answer incorrect and moved on to another contestant, who said the location with an “n” sound at the end but also offered a soft “g” — like “gel,” which is incorrect.
“Yeah, we just needed the ‘n’ in Gethsemane — that’s correct,” said Jennings, who also pronounced the name with a soft “g.”
Online viewers were quick to denounce the host.
“Uhhhh @Jeopardy —-Who decided on the correct pronunciation of ‘Gethsemane’?? I need to hear that again,” tweeted one user.
In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.
A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.
In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.
After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”
Mirkhail works with Respond Crisis Translation, a coalition of over 2,500 translators that provides interpretation and translation services for migrants and asylum seekers around the world. She told Rest of World this kind of small mistake can be life-changing for a refugee. In the wake of the Taliban’s return to power in Afghanistan, there is an urgent demand for crisis translators working in languages such as Pashto and Dari. Working alongside refugees, these translators can help clients navigate complex immigration systems, including drafting immigration forms such as asylum applications. But a new generation of machine translation tools is changing the landscape of this field — and adding a new set of risks for refugees.
Machine translation has been on the rise since the introduction of neural network techniques, similar to those used in generative artificial intelligence. In 2016, Google launched its first neural machine translation system. Today, when subtitling films for streaming companies or drafting documents for law firms, some of the most established global translation companies use neural machine translation in their workflow in an effort to cut costs and boost productivity. But like the new generation of AI chatbots, machine translation tools are far from perfect, and the errors they introduce can have severe consequences.
Companies working in this space generally recognize the danger of pure automation, and insist that their tools be used only under close human supervision. “Machine-learning translations are not yet in a place to be trusted completely without human review,” said Sara Haj-Hassan, chief operations officer of Tarjimly, a nonprofit startup that connects refugees and asylum seekers with human volunteer translators and interpreters, to Rest of World. “Doing so would be irresponsible and would lead to inequitable opportunities for populations receiving AI translations, since mistranslations could lead to the rejection of cases or other severe consequences.”
The unmet demand, however, is undeniable. Tarjimly, which currently works with over 250 language pairs, saw a fourfold increase in requests for Afghan languages in 2022, according to the organization’s impact report.
Similar concerns have been raised over generative AI tools. OpenAI, the company that makes ChatGPT, updated its user policies in late March with rules that prohibit the use of the AI chatbot in “high-risk government decision-making,” including work related to both migration and asylum.
The stakes for getting translations right can be grave for asylum seekers filling out applications. “One of the things that we see frequently is pointing to small technicalities on asylum applications,” Ariel Koren, the founder of Respond Crisis Translation, told Rest of World. “That’s why you need human attentiveness. The machine, it can be your friend that you use as a helper, but if you’re using that as the ultimate [solution], if that’s where it starts and ends, you’re going to fail this person.”
That is particularly true for work with Afghan refugees who speak Pashto and Dari — languages native to tens of millions of Afghans around the world. The United Nations High Commissioner for Refugees (UNHCR) estimates that over 6 million Afghans were displaced by the end of 2021 alone, including those displaced following the U.S. withdrawal from Afghanistan and the Taliban’s return to power. At the same time, AI language tools for Pashto have lagged behind more dominant languages like English and Mandarin. The latter are considered “high-resource” languages, with a large amount of texts available online compared to a language like Pashto.
It is difficult to say how prevalent machine translation is in the immigration system, but there’s clear evidence it is being used. In 2019, ProPublica reported that U.S. Citizenship and Immigration Services (USCIS) officers were instructed to use Google Translate to vet the social media accounts of asylum applicants. Major translation companies like LanguageLine, TransPerfect, and Lionbridge have contracts with U.S. federal immigration agencies, some totaling millions of dollars. Each of these companies advertises machine translation in its suite of services. Ultimately, it is up to each agency and department whether they opt in or out of these tools in their day-to-day operations.
At the same time, providers are actively pitching refugee organizations to integrate machine translation into their work. International Refugee Assistance Project (IRAP), a nonprofit that offers legal support to refugees in Afghanistan and Pakistan, received multiple solicitation emails from a for-profit government contractor concerning machine translation.
One of those emails, sent by U.K.-based translation company The Big Word, pitched WordSynk: the company’s signature product, described on its website as “utilising Machine Translation, AI, and translation memory to leverage high-quality, cost-effective outcomes.” IRAP never responded to The Big Word’s sales pitch, but the company lists the U.S. Department of Defense, the U.S. Army, and the U.K. Ministry of Justice among its clients. An internal document, reviewed by Rest of World, lists Pashto and Dari among The Big Word’s “core language” offerings for government customers.
The Big Word did not respond to Rest of World’s request for comment.
Whether automated or not, translation flubs in Pashto and Dari have become commonplace. As recently as early April, the German Embassy to Afghanistan posted a tweet in Pashto decrying the Taliban’s ban on women working. The tweet was quickly ridiculed by native speakers, with some quote tweets claiming that not a single sentence was legible.
“Kindly please don’t insult our language. Thousands [of] Pashtun are living in Germany but still they don’t hire an expert for Pashto,” posted one user, researcher Afzal Zarghoni. The German Embassy later deleted the tweet.
Seemingly trivial translation errors can sometimes lead to harmful distortions when drafting asylum applications.
“[Machine translation] doesn’t have a cultural awareness. Especially if you’re doing things like a personal statement that’s handwritten by someone,” Damian Harris-Hernandez, co-founder of the Refugee Translation Project, told Rest of World. “The person might not be perfect at writing, and also might use metaphors, might use idioms, turns of phrases that if you take literally, don’t make any sense at all.”
Based in New York, the Refugee Translation Project works extensively with Afghan refugees, translating police reports, news clippings, and personal testimonies to bolster claims that asylum seekers have a credible fear of persecution. When machine translation is used to draft these documents, cultural blind spots and failures to understand regional colloquialisms can introduce inaccuracies. These errors can compromise claims in the rigorous review so many Afghan refugees experience.
Dari and Pashto are currently Refugee Translation Project’s most frequently requested languages, according to Harris-Hernandez. Despite the demand, the organization refuses to use automated translation tools, relying exclusively on human translators.
“There’s not really a lot of advantage to [machine translation]. The advantage comes in if you don’t know the language and you’re trying to translate something for a customer,” Harris-Hernandez said, explaining that the incentives look different for his organization compared to many for-profit providers. “The only thing that matters is the money that comes in.”
Muhammed Yaseen, a member of the Afghan team at Respond Crisis Translation, told Rest of World that organizations are banning the use of machine translation for good reason. He claims the machine tools he’s tested are unable to translate certain words, such as the terms for some relatives in Dari dialects, and specialized words like military ranks that can be vital to the asylum applications of former U.S.-allied soldiers.
“If we use machines for Afghans, I think we would be unfair to them,” Yaseen said. “I really feel that if we rely on machines, I [am] expecting at least 40% of our decision making on the asylum applications for refugees would be incorrect.”
Last week the Merriam-Webster dictionary dealt with threats over gender definitions. So this week I interviewed its lexicographer Peter Sokolowski.
Peter: “The pandemic brought new definitions. Like subvariant. Shrinkflation, which is reducing a product’s amount but charging the same price. We don’t retire words. Like we keep President Truman’s snollygoster. Means ‘unscrupulous person.’ ”
The very first dictionary? “Its original was 1604. After Queen Elizabeth’s era. Only one at Shakespeare’s time. Based on Latin and Greek, just 2,400 words that were of the time like ‘microcosm’ and ‘integrity.’”
About today’s him/her/binary lexicon he said: “Huge problem. Slang and street words, informal language, is changing a lot. Frequently written before it’s spoken. Especially identity terms so mistakes get made. Some text abbreviations like LOL make the dictionary because they’re frequent in print.
“Recording our ‘Word of the Day’ podcast I did two minutes on one certain word. A colleague later informed me I’d mispronounced it each time I said it.”
All dolled up
Margot Robbie is now a live “Barbie.” A living doll. Even she was surprised this thing got green lighted.
“My first reaction was ‘It’s so good. Shame they’ll never make this movie.’ But they did.”
Director’s Greta Gerwig. Playing with dolls is Will Ferrell, Issa Rae, Kate McKinnon, America Ferrera, Michael Cera, Rhea Perlman. Ryan Gosling’s one of five Kens.
Listen, don’t ask. Warner’s is toying with July 21 release.
Crown site-ings
Britannia free of the odor of Me-Me-Meghan, is suddenly awash with tourists. More bodies than her Prince Empty has lawyers. All swarming Buckingham Palace to glimpse the King. First coronation in 70 years. Tourists cram Trafalgar Square, Covent Garden, Portobello Road. Hotel rooms, $1,000 a night. More languages than are jumping over US borders. Soon Kamala may even be able to converse with one of them.
Judge dread & co.
NYC progressives were outraged when Justice Alito wrote an embryo had “human rights.” NOT upset when prospective NY Chief Judge Rowan Wilson dissented in 70 pages that Happy the Elephant had “human rights”? And why weren’t women’s rights organizations outraged by his opinion last month overruling a jury verdict and dismissing charges against a convicted rapist? His reasons? The DA was too slow. Didn’t give the rapist a “speedy trial.”
If confirmed, only “progressives” and elephants may be happy.
Pour planning
To those who fled New York for the warmth, friendly, Sunkist, orange-growing outdoor-loving enveloping atmosphere of sunny Florida: Know it poured there all weekend. Big time heavy rain. Seated outdoors in West Palm’s Bradley’s, patrons removed their shoes during dinner. Inside was wet as well. Some diners even took off their shoes to walk to drier tables.
Florida matchmaker: “I have a girl for you. It’ll cost $50,000.” Guy: “Can I see her picture?” Matchmaker: “For only $50,000, we don’t show pictures.”
Second visit. Matchmaker: “Truth is she has a few false teeth.” Guy: “Gold?”
Definitely not only in New York, kids, not only in New York.
Eslabon Armado and Peso Pluma are on a hot streak with their collaborative effort “Ella Baila Sola.”
The track hit No. 1 on Billboard‘s Hot Latin Songs chart (on the April 15-dated list) after debuting atop Latin Streaming Songs. Additionally, it earned both acts career-highs on the Billboard Hot 100, becoming the first regional Mexican song to reach the top 10 of the all-genre chart.
Penned by Eslabon’s vocalist Pedro Tovar, “Ella Baila Sola” (she dances alone) tells the story of two friends who are talking about a pretty girl at a party.
“We didn’t expect for the song to make so much noise!” Tovar previously told Billboard. “I really liked the song when I first wrote it, but I didn’t really expect it to be such a big hit. I previewed it on my stories on Instagram and two days after it went viral on TikTok and that’s when I knew that the song was going to do big numbers.”
Below, read the complete lyrics translated into English:
Buddy, what do you think of that girl?
The one who’s dancing alone, I like her for me
She, she knows she’s good looking
And everyone is looking at her dance
I get close and tell her a verb
We take drinks without buts, only temptation
I told her “I’m going to conquer your family, and one day you’ll be mine”
She said that I’m too crazy but she likes it
That no guy acts like me
I’m not a guy who has money
But speaking of the heart, I’ll give you everything
She grabbed me by the hand
My buddy didn’t even believe it, it was me when I passed by
Her body
I swear to God it was so perfect
Her waist as a model
Her eyes
I fell in love from the beginning
She likes it and I like it
Large language models, most notably ChatGPT and GPT-4, continue to be the hottest topic in the AI industry, and the language services industry is scrambling to understand what kind of impact the latest generative AI technology will have.
Microsoft researcher Christian Federmann said in a SlatorPod episode on April 12, 2023, that he expects the fervor over generative AI to hit a wall within the next six months as people realize the real-world limitations of the technology (watch the podcast to get his full thoughts), but for now we are firmly entrenched in the hype part of the cycle.
And while there are growing concerns over whether large language models will lead to translation job losses, there is also a lot of interest in how these models perform on translation tasks.
Researchers have started to publish their initial investigations into the translation capabilities of ChatGPT, and this article takes a look at four more papers, all published within the last few weeks, that look at ways to optimize ChatGPT for different translation tasks. Notably, three of the four were published by researchers at major Chinese tech firms.
Building Better Prompts
Researchers at Massey University in New Zealand looked at ways to “unleash the power of ChatGPT” for machine translation through designed prompts. They found that “ChatGPT with designed translation prompts can achieve comparable or better performance over professional translation systems for high-resource language translations”, but lagged significantly on low-resource translations. The designed prompts included additional information such as translation direction and what type of content was being translated.
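As a rough illustration of what such a designed prompt might look like (the wording and function below are assumptions for illustration, not the paper's actual template), the prompt can bundle the translation direction and the type of content into the instruction:

```python
def build_translation_prompt(text, src_lang, tgt_lang, domain=None):
    """Assemble a 'designed' MT prompt that states the translation
    direction and, optionally, what type of content is being translated.
    Illustrative sketch only -- not the template from the paper."""
    parts = [f"Translate the following {src_lang} text into {tgt_lang}."]
    if domain:
        # Extra "auxiliary" information, e.g. the genre of the text
        parts.append(f"The text is {domain}, so preserve its register and terminology.")
    parts.append(f"Text: {text}")
    return "\n".join(parts)

prompt = build_translation_prompt(
    "La traduction automatique progresse vite.",
    src_lang="French",
    tgt_lang="English",
    domain="a news article",
)
print(prompt)
```

The paper's finding, in these terms, is that stating the direction and content type in the prompt helped ChatGPT match dedicated MT systems on high-resource pairs, while low-resource pairs still lagged.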
Report authors also looked at how other “auxiliary data” such as parts of speech tags impacted translation, with mixed results.
In another paper, researchers at Chinese tech firm Tencent’s AI Lab (the paper includes most of the same Tencent researchers that wrote an earlier paper on ChatGPT covered here) looked to develop a new framework for interaction with chat-based LLMs like ChatGPT that would yield better translation results.
Their framework “reformulates translation data into the instruction-following style, and introduces a ‘Hint’ field for incorporating extra requirements to regulate the translation process,” which they say “improves the translation performance of vanilla LLMs significantly.”
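A minimal sketch of that reformulation idea (the field names and wording here are assumptions; the paper's actual schema may differ): a parallel sentence pair is rewritten as an instruction-following record, with an optional hint carrying extra requirements.

```python
def to_instruction_record(src, tgt, src_lang, tgt_lang, hint=None):
    """Reformulate a translation pair into instruction-following style,
    with an optional 'Hint' field for extra requirements (tone,
    terminology, etc.). Illustrative only -- not the paper's exact schema."""
    record = {
        "instruction": f"Translate from {src_lang} to {tgt_lang}.",
        "input": src,
        "output": tgt,
    }
    if hint:
        record["hint"] = hint  # e.g. "Use a formal register."
    return record

record = to_instruction_record(
    "Guten Morgen.", "Good morning.", "German", "English",
    hint="Keep the greeting informal.",
)
```

The point of the hint field is that requirements which would otherwise be buried in the source text become explicit, machine-readable constraints on the translation.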
Researchers at JD Explore Academy, part of Chinese e-commerce giant JD.com, published a paper that investigated ways to improve ChatGPT’s ability to evaluate translations. They introduce a new way of prompting LLMs and specifically ChatGPT for translation evaluation that they call Error Analysis Prompting that “can generate human-like MT evaluations at both the system and segment level.”
The report authors cited a recent paper from Microsoft researchers Federmann (mentioned above) and Tom Kocmi that found ChatGPT’s ability to assess the quality of machine translation (MT) achieves state-of-the-art performance at the system level but performs poorly at the segment level. JD’s researchers say their method takes this research a step further, with promising results.
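In spirit, Error Analysis Prompting asks the model to enumerate translation errors by severity before deriving a score, rather than scoring directly. A hypothetical prompt along those lines (the concrete wording and scoring rule are assumptions, not the paper's template) might be assembled like this:

```python
def error_analysis_prompt(source, translation):
    """Build an MT-evaluation prompt in the spirit of Error Analysis
    Prompting: list major/minor errors first, then derive a score.
    The exact wording and scoring scale are illustrative assumptions."""
    return (
        "You are evaluating a machine translation.\n"
        f"Source: {source}\n"
        f"Translation: {translation}\n"
        "Step 1: List all major errors (meaning-changing) and minor "
        "errors (fluency, style).\n"
        "Step 2: Based on the errors found, give a quality score from 0 to 100."
    )

p = error_analysis_prompt("Bonjour le monde.", "Hello the world.")
```

Forcing the error list to come first is what makes the evaluation "human-like": segment-level scores are grounded in identified errors rather than produced in one opaque step.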
Despite the positive results, JD’s researchers also found that ChatGPT had some limitations as an MT evaluator, including giving different scores to the same translation and showing preference for the earliest text in a query when multiple translations were provided.
This paper follows another that JD’s researchers published (with some of the same authors) that looked at ways to improve ChatGPT’s translation outputs.
Finally, Tencent’s AI Lab researchers published another paper, this one investigating how LLMs perform at document-level machine translation and discourse phenomena such as entity consistency, referential expressions, and coherence. The researchers found that ChatGPT outperformed commercial machine translation systems (they tested against Google Translate, DeepL and Tencent’s own TranSmart service) in terms of human evaluation of discourse awareness, though they underperformed against the d-BLEU benchmark.
“ChatGPT and GPT-4 have demonstrated superior performance and show potential to become a new and promising paradigm for document-level translation,” the authors said.
Covenant University has signed a Memorandum of Understanding with OBTranslate to undertake research and development activities in a broad range of areas.
The MoU between the two parties covers machine translation, artificial intelligence, natural language processing, and linguistics for OBTranslate scientists to further develop the “OBTranslate” machine translation and generative AI platform.
This is contained in a joint statement issued on Sunday by the Vice-Chancellor of the university, Prof. Abiodun Adebayo, and the Founder/Chief Executive Officer of OpenBinacle Group, Mr. Emmanuel Gabriel.
OBTranslate, a subsidiary of OpenBinacle Group, is a deep-learning company that has developed an online computer-assisted translation tool, neural machine translation engine, and AI platform for over 2,000 African and European languages. It also aims to bridge language barriers on the African continent and globally.
Prof. Abiodun Adebayo said in the statement that the partnership is a welcome development for the institution.
The VC added that the university’s departure philosophy and pillars are deeply rooted in Biblical principles and are directed towards effecting change in the recovery process of Nigeria’s education sector and the restoration of the dignity of the black man.
“Research has been central to Covenant University’s twin missions of offering solutions to critical societal problems and being a leading global educational institution. These ambitions are intimately linked, and their innovations have benefited the country’s health, economy, and political processes and made Covenant increasingly prominent.
“The University’s current feat as one of Africa’s leading research universities is made possible by its world-class faculty, staff, and postgraduate students who are immersed in innovative and cutting-edge research, including studies in bioinformatics, human genome research, cancer research, renewable energy, IOT-enabled smart and connected communities, biotechnology, as well as leadership, arts, humanities, social sciences, among others,” he said.
According to him, these research activities are coordinated under research clusters and centres of excellence superintended by the university’s Centre for Research, Innovation, and Discovery.
“The university is therefore excited by this partnership with OBTranslate and sees it as another opportunity for an impactful contribution towards expanding the frontier of knowledge in Africa.”
As part of its goals, OBTranslate will enable ‘free text’ and document translation, speaker devices, smartphones, and humanoid robots to understand African and European languages.
On his part, Gabriel said the long-term mission of OBTranslate is to identify each of these languages and provide natural language processing that will sustain and preserve them from extinction.
According to him, the goal of automatic translation of free-text and documents from foreign languages into African languages (or between any two languages) is one of the oldest pursuits of AI research at OBTranslate.
Furthermore, he said, this collaboration is essential to OBTranslate AI research and the preservation of over 2,000 African languages spoken in all 55 countries in Africa.