Tuesday, May 31, 2022

Google's massive language translation work identifies where it goofs up - ZDNet - Translation


Scores for languages when translating from English and back to English again, correlated to how many sample sentences the language has. Toward the right side, higher numbers of example sentences result in better scores. There are outliers, such as English in Cyrillic, which has very few examples but translates well. 

Bapna et al., 2022

What do you do after you have collected writing samples for a thousand languages for the purpose of translation, and humans still rate the resulting translations a fail?

Examine the failures, obviously. 

And that is the interesting work that Google machine learning scientists related this month in a massive research paper on multi-lingual translation, "Building Machine Translation Systems for the Next Thousand Languages."

"Despite tremendous progress in low-resource machine translation, the number of languages for which widely-available, general-domain MT systems have been built has been limited to around 100, which is a small fraction of the over 7000+ languages that are spoken in the world today," write lead author Ankur Bapna and colleagues. 

The paper describes a project to create a data set of over a thousand languages, including so-called low-resource languages, those that have very few documents to use as samples for training machine learning. 

Also: DeepMind: Why is AI so good at language? It's something in language itself

While it is easy to collect billions of example sentences for English, and over a hundred million example sentences for Icelandic, for example, the language kalaallisut, spoken by about 56,000 people in Greenland, has fewer than a million, and the Kelantan-Pattani Malay language, spoken by about five million people in Malaysia and Thailand, has fewer than 10,000 example sentences readily available.

To compile a data set for machine translation for such low-resource languages, Bapna and two dozen colleagues first created a tool to scour the Internet and identify texts in low-resource languages. The authors used a number of machine learning techniques to extend a system called LangID, which comprises methods for identifying whether a Web text belongs to a given language. Much of that work is a rather involved process of eliminating false positives.
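
The paper's LangID system is Google-internal, but the basic idea of confidence-thresholded language identification over crawled text can be illustrated with public tools. The sketch below is a rough, assumed illustration using fastText's published lid.176.bin language-ID model (not the system from the paper); the model path, threshold, and sample sentences are placeholders.

```python
# Minimal sketch of confidence-thresholded language ID for crawled sentences.
# NOT Google's LangID: fastText's public lid.176.bin model is used purely as
# an illustration, and the threshold and sample text are assumptions.
import fasttext

model = fasttext.load_model("lid.176.bin")  # pretrained language-ID model

def keep_sentence(sentence: str, target_lang: str, min_conf: float = 0.9) -> bool:
    """Keep a crawled sentence only if it is classified as the target
    language with high confidence, to reduce false positives."""
    labels, probs = model.predict(sentence.replace("\n", " "), k=1)
    lang = labels[0].replace("__label__", "")
    return lang == target_lang and float(probs[0]) >= min_conf

# Example: keep only sentences confidently identified as Icelandic ("is").
crawled = ["Góðan daginn, hvernig hefur þú það?", "Hello there, how are you?"]
icelandic = [s for s in crawled if keep_sentence(s, "is")]
```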

After scouring the Web with LangID techniques, the authors were able to assemble "a dataset with corpora for 1503 low-resource languages, ranging in size from one sentence (Mape) to 83 million sentences (Sabah Malay)." 

The scientists boiled that list down to 1,057 languages "where we recovered more than 25,000 monolingual sentences (before deduplication)," and combined that group of samples with the much larger data for 83 "high-resource languages" such as English. 

Also: AI: The pattern is not in the data, it's in the machine

They then tested their data set by running experiments to translate between the languages in that set, using various versions of the ubiquitous Transformer neural net for language modeling. To test translation performance, the authors focused on translating to and from English for 38 languages for which they had obtained reference translations, including kalaallisut.
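
The specific models from the paper are not public, but the round-trip test described in the figure caption (English into another language and back to English) is easy to approximate with off-the-shelf Transformer translation models. The sketch below assumes the Hugging Face transformers library and the public Helsinki-NLP OPUS-MT English-German checkpoints purely as stand-ins, not the paper's models.

```python
# Minimal sketch of a round-trip translation check with off-the-shelf
# Transformer MT models. These are NOT the models from the paper; the public
# Helsinki-NLP OPUS-MT checkpoints are assumed here only as stand-ins.
from transformers import pipeline

en_to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

source = "The researchers collected sentences from the web for training."
forward = en_to_de(source)[0]["translation_text"]
round_trip = de_to_en(forward)[0]["translation_text"]

# Comparing round_trip with source gives a crude sense of translation quality.
print(forward)
print(round_trip)
```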

That's where the most interesting part comes in. The authors asked human reviewers who are native speakers of low-resource languages to rate the quality of translations for 28 languages on a scale of zero to six, with 0 being "nonsense or wrong language" and 6 being "perfect." 

Also: Facebook open sources tower of Babel, Klingon not supported

The results are not great. Of the 28 languages translated from English, 13 were rated below 4 on the scale in terms of translation quality. That would imply almost half of the English-to-target translations were mediocre. 

The authors have a fascinating discussion starting on page 23 of what seems to have gone wrong in those translations with weak ratings. 

"The biggest takeaway is that automatic metrics overestimate performance on related dialects," they write, meaning, scores the machine assigns to translations, such as the widely used BLEU score, tend to give credit where the neural network is simply translating into a wrong language that is like another language. For example, "Nigerian Pidgin (pcm), a dialect of English, had very high BLEU and CHRF scores, of around 35 and 60 respectively. However, humans rated the translations very harshly, with a full 20% judged as 'Nonsense/Wrong Language', with trusted native speakers confirming that the translations were unusable."

"What's happening here that the model translates into (a corrupted version of ) the wrong dialect, but it is close enough on a character n-gram level" for the automatic benchmark to score it high, they observe. 

"This is the result of a data pollution problem," they deduce, "since these languages are so close to other much more common languages on the web […] the training data is much more likely to be mixed with either corrupted versions of the higher-resource language, or other varieties."


Examples of translations with correct terms in blue and mistranslations in yellow. The left-hand column shows the code of the language being translated into, using standard BCP-47 tags.

Bapna et al., 2022

Also: Google uses MLPerf competition to showcase performance on gigantic version of BERT language model

And then there are what the authors term "characteristic error modes" in translations, such as "translating nouns that occur in distributionally similar contexts in the training data," for instance substituting relatively common nouns like "tiger" with another kind of animal word, they note, "showing that the model learned the distributional context in which this noun occurs, but was unable to acquire the exact mappings from one language to another with enough detail within this category."

Such a problem occurs with "animal names, colors, and times of day," and "was also an issue with adjectives, but we observed few such errors with verbs. Sometimes, words were translated into sentences that might be considered culturally analogous concepts – for example, translating "cheese and butter" into "curd and yogurt" when translating from Sanskrit."

Also: Google's latest language machine puts emphasis back on language

The authors make an extensive case for working closely with native speakers:

We stress that where possible, it is important to try to build relationships with native speakers and members of these communities, rather than simply interacting with them as crowd-workers at a distance. For this work, the authors reached out to members of as many communities as we could, having conversations with over 100 members of these communities, many of whom were active in this project. 

An appendix offers gratitude to a long list of such native speakers. 

Despite the failures cited, the authors conclude the work has successes of note. In particular, using the LangID approach to scour the web, "we are able to build a multilingual unlabeled text dataset containing over 1 million sentences for more than 200 languages and over 100 thousand sentences in more than 400 languages."

And the work with Transformer models convinces them that "it is possible to build high quality, practical MT models for long-tail languages utilizing the approach described in this work."


The Potential of AI-Based Machine Translation - The Coin Republic - Translation

[unable to retrieve full-text content]

The Potential of AI-Based Machine Translation  The Coin Republic

Rakuten Viki's Volunteer Translators Feel the Pressure - Vulture - Translation

Animation: Vulture

Sheree Miller has pretty much given up on American television. The 50-year-old stay-at-home mom spends about six hours a day sitting in her living room in La Vernia, Texas, in front of a large TV that’s never turned on. Instead, she’s busy working on her laptop as a volunteer for Rakuten Viki, a streaming service that adds subtitles to entertainment produced primarily in Asian countries. For over a decade, Miller has been part of the global community of unpaid users who translate video content on Viki into more than 150 languages. Viki declined to share an exact number, but Miller estimates that there are hundreds of thousands of people participating in the system; the 2015 book The Informal Media Economy put the number at more than 100,000. For their efforts, top contributors get a free Viki subscription. That’s all the compensation Miller says she needs: “The platform itself is for volunteers — those of us in obsession mode.”

Before Viki, overseas fans who wanted to watch subtitled Asian TV shows or films found themselves playing monthslong games of roulette with piracy sites. “It was just horrible, the wait,” Miller recalls. “Sometimes dramas would never get finished, or a website would be obliterated, and then you’d have to go looking around, trying to scrounge up another website to find the same drama.” Viki was born in 2007 as ViiKii.net, a project by three college students at Harvard and Stanford. The site’s name combined the words “video” and “Wiki,” reflecting a desire to translate videos using crowdsourced contributions. The founders didn’t invent the concept of community-powered subtitling — before high-speed internet existed, anime fans were trading “fansubs” on VHS tapes and laserdiscs — but they weren’t shy about wanting Viki to eventually become, according to a blog by co-founder Jiwon Moon in 2008, a new “grand cultural Silk Road.” Viki developed software that allowed multiple people to subtitle a project simultaneously, and in 2013, three years after its official launch, the company was acquired by Japanese e-commerce giant Rakuten for $200 million.

Today, the site once funded by donations from early users like Miller now offers two tiers of paid subscriptions as well as a limited free plan with ads. Much of Viki’s popular content hails from South Korea, including K-dramas What’s Wrong With Secretary Kim and Descendants of the Sun, reality programs like Queendom 2 and Running Man, and annual awards shows for music and acting. The platform also hosts titles from mainland China, Taiwan, and Japan, among several other countries, and since 2015, it has produced its own Viki Originals, including the viral Chinese series Go Go Squid! Viki survived even as major streamers such as Netflix and Amazon bid up the costs of licensing Asian content, which Variety reported was the reason for Warner Bros.’s 2018 shutdown of DramaFever, once Viki’s biggest competitor. By the end of 2021, Viki was serving 53 million users — many of whom want their new subtitled content now.

“Viewers of every language have something in common: They bitch, they complain,” says Connie Meredith, an 80-year-old contributor based in Honolulu, Hawaii. On Viki, an episode of a popular drama is typically translated into English, Viki’s base language for subsequent translations, within 24 to 48 hours of airing. Meredith says volunteers have been able to finish English subtitles as quickly as two to three hours. But less popular shows or shows that have complex dialogue take longer to be completed, and secondary language teams have to wait their turn for the English version. If you scroll through the ratings of any given title on Viki, you’ll see that the worst reviews overwhelmingly reference subtitling speed rather than the actual quality of the show or movie. Did the French translators go on vacation or what? Why am I paying for Viki if the Portuguese subtitles are never done on time? Meredith says reviewers frequently point out when translations are available on other sites before Viki. “And I’m thinking, Sure, you can get it with mistakes,” she says. “We are not in a race.”

In a sense, though, they are. The competition for international programming is only getting fiercer in the streaming-verse — and if Viki’s users aren’t getting the content they want at a speed they deem sufficient, they may look elsewhere. Yet Viki’s strategies to ratchet up subtitle production haven’t always sat well with an unpaid volunteer workforce that has wondered at times what, and who, the company actually values. “In experimenting to see what worked and what didn’t,” says Mariliam Semidey, 37, who served as Viki’s senior manager of customer and community experience from July 2016 to June 2021, “we might’ve made some mistakes.”

The subtitle editor, where Viki’s volunteers adjust the languages they are translating. Photo: Rakuten Viki

Time and time again while Semidey worked there, the question came up during corporate meetings: Should Viki get rid of its volunteers? But management always ultimately agreed that those contributors were too valuable because of the quality of their work. Over the years, Viki’s volunteers have developed their own training programs and standards for capturing cultural references, obscure idioms, and other tricky linguistic nuances. Their self-managed system involves several roles that don’t require fluency in multiple languages, although translation experience is also not necessary to become a subtitler. As a volunteer team lead, Meredith accepts subtitlers who do at least “a halfway decent job” on a short translation test she created. “The rule is don’t guess. If you don’t know the whole thing, skip it, and somebody else will fill it in,” says Meredith, who has taken Korean language courses through the University of Hawaii to improve her vocabulary. She might spend two hours hunting down a specific slang word or reaching out to friends with Ph.D.’s to determine the exact meaning of a phrase. Those efforts are representative of an ethos among the volunteers that has been present since the beginning: the understanding that good translations require time, effort, and teamwork.

“I think a lot of devoted volunteers feel the same way I do,” Meredith adds, “that we wouldn’t be here if it was a paid job because they can’t pay us enough for the time we spend.”

That’s why tension creeps in when volunteers get pushed to complete their contributions faster. Semidey says that when she was at Viki, any hour-long episode that volunteers took longer than eight hours to complete was considered “late” based on the speed of other streaming sites. If a team displayed a pattern of being too slow with their subtitles, Viki would speak up. Miller gets frustrated when Viki staffers reach out to tell her that volunteers should be moving faster. “Most of the people I know who contribute to Viki are not stay-at-home moms,” she says. “They have full-time jobs as nurses and techs and all kinds of things.” There are forms of compensation for contributors, starting with a free subscription to Viki, which is earned when a volunteer finishes 3,000 subtitles or segments (roughly equivalent to seven or eight episodes of work). From there, incentives are given to subtitlers who reach 20,000 contributions, rank high on Viki’s contribution leaderboard, or win volunteer-organized contests — in the past, such rewards have included tote bags, signed Psy CDs, and a Roku 3. Miller sees these perks as nice gestures of recognition. But as long as there’s no payment, it doesn’t sit right with her “to say to somebody that has a full-time job, ‘You better get your heinie here and sub this today at so-and-so time.’”

The dynamic Miller describes has contributed to the controversial efforts Viki has made to increase translation output over the years. In 2018, according to Semidey, Viki began using paid translators to speed up progress on select titles that weren’t expected to be popular, though Semidey says that the company sometimes made incorrect predictions and brought them in on shows that volunteers wanted for themselves. Nonetheless, she says, the decision resulted in a 50 to 60 percent decrease in customer complaints, though volunteers feel that it comes at the cost of quality: Often the subtitles they leave for last, such as the lyrics of official soundtracks (OSTs), require the most intricate translation work. “It’s like the difficulty of subtitling poetry,” says Meredith, who spearheaded the movement to start subbing OSTs on Viki in the first place. She estimates that the overwhelming majority of lines from paid translators “are wrongly subbed,” compelling volunteers to go back and make corrections.

The same goes for content that arrives on Viki with premade segments and subtitles. Beverley Johnson Wong, a 64-year-old contributor based in the Twin Cities, says it takes “at least double or triple the time” to correct errors compared to doing the work from scratch. She specializes in segmenting, which involves cutting videos into the clips that subtitlers fill in. Redoing short segments that cause text to disappear too quickly is tedious and doesn’t count toward contributions for Viki’s leaderboard, yet segmenters still clean up the mistakes. “We don’t want people thinking that was our work,” she explains, “because it was not done well.”

A waveform of the audio that Viki’s segmenters cut into parts. Photo: Rakuten Viki

Volunteer frustrations hit a flash point in September 2019, when Viki introduced a robot into its workforce. That month, hundreds of Viki volunteers went on strike because of the rollout of “Vikibot,” a collaborative project with the Rakuten Institute of Technology that used machine learning to automatically create segments and suggest subtitles. A form of AI called natural language processing also allowed it to learn from Viki’s large library of volunteer translations. The bot had already been implemented on inactive titles for a couple years, but Semidey says it was fed incorrect data and began overwriting existing subtitles in several languages. Outraged volunteers who felt that the machine’s work was subpar translated a strike notice into 12 languages. On Viki’s discussion board, one disgruntled user pointed out the site’s own ban on volunteers using automatic translation tools such as Google Translate: “Can I report Viki if they don’t follow the own guidelines they set?” Another warned, “Remember that we are the reason why Viki exists today as it is.”

Roughly a week later, Viki’s community team held a call to issue an apology and assure volunteers that future use of machine translation would be “as minimal as possible.” But the trust was broken, says Semidey. When Viki proposed a feature the following year that would allow viewers to opt to switch to a separate track of “auto-translated” subtitles, volunteers reacted negatively, stating that the decision would pressure them to rush to prevent viewers from settling for lower-quality translations. While the auto-subs feature was not officially implemented, current volunteers still identify external translations from paid translators and auto-subs as a persistent issue.

Semidey says that working at Viki was a constant struggle to balance the needs of volunteers with the wishes of customers who could threaten to cancel subscriptions. During her time at the company, she remembers having at most three staff members assigned to directly interact with its vast network of volunteers. She also saw several well-known contributors quit over conflicts with Viki staff or other volunteers. When she left in 2021, she felt there were many promises Viki failed to deliver on, including better project-management tools.

“We understand that there are elements that can be improved within our Viki Contributor community, which is why we value their thorough and honest feedback,” Viki said in a statement from current community manager Sean Smith, noting that the company has “enhanced” project management and user-messaging features over time. “We’re actively engaged with a mixture of contributors to further improve tools and workflows.”

Meanwhile, Viki’s overall contributor numbers continue to grow. During the pandemic, volunteers had more free time to join Viki’s pool, leading to 22 percent year-over-year growth in subtitlers globally from 2020 to 2021, according to company data. In 2021, the average monthly contribution by Viki’s community of active subtitlers and segmenters increased by nearly 40 percent. Manuela Ogar Mraovic, who is currently topping the May community leaderboard, had only been subtitling on Viki for a couple months before taking the top spot in March. The retiree from Šibenik, Croatia, hasn’t dropped below second place since, and says she’s “really enjoying every second in this community, working and gaining knowledge.”

Some longtime contributors nevertheless have concerns about the platform’s sustainability, especially as Asian entertainment continues to gain international popularity and attract investment from large corporations like Netflix, Disney+, and Amazon Prime that don’t rely on volunteer-powered systems. One anonymous volunteer who was involved in the 2019 strike says they believe Viki’s volunteer community will inevitably “disappear” in the long term, noting that machine translation software is said to be improving in several key languages. “But we have dozens and dozens of other languages on Viki. That’s why they need us,” the volunteer adds. “For now, we are useful because we are better qualified in all the languages of the world.”


The Potential of AI-Based Machine Translation - The Coin Republic - Translation

[unable to retrieve full-text content]

  1. The Potential of AI-Based Machine Translation  The Coin Republic
  2. AI Translation Market 2022: Potential growth, attractive valuation make it is a long-term investment Know the COVID19 Impact – The Greater Binghamton Business Journal  The Greater Binghamton Business Journal
  3. Machine Translation (MT) Market Trend And Forecast| Key Players – Apptek, Asia Online, Cloudwords, Ibm, Lighthouse Ip Group – ManufactureLink  ManufactureLink
  4. AI Translation Market Trend And Forecast| Key Players – Soundai, Mi, Rozetta, Google, Facebook – Industrial IT  Industrial IT
  5. Machine Translation (MT) Market 2022 Driving Factors Forecast Research 2028 – Honyaku Center Inc., Lionbridge Technologies, Inc., CICERON, Jonckers, PROMT Ltd., etc – Xaralite  Xaralite
  6. View Full Coverage on Google News


Monday, May 30, 2022

Argo Translation Acquires ICDTranslation to Help More Businesses Break the Language Barrier - EIN News - Translation

Argo Translation, Inc.

A combined customer service powerhouse propels the translation company's mission to create greater understanding in every language.

Understanding is at the heart of everything we do.”

— Peter Argondizzo

CHICAGO, IL, UNITED STATES, May 30, 2022 /EINPresswire.com/ -- Argo Translation, a 28-year-old language services firm based in Chicago, announced today it is closing on the acquisition of ICDTranslation, a Milwaukee-based translation company.

Through the acquisition of ICDTranslation, Argo Translation will serve a greater market by doubling its employee base, adding deep expertise, and bolstering customer service capabilities. This acquisition is part of the Company’s strategic growth plan that supports additional service offerings to a broader customer base.

“Understanding is at the heart of everything we do,” said Peter Argondizzo, who founded Argo Translation with Jackie Lucarelli in 1995. “We’ve always respected ICDTranslation as a company, specifically for its commitment to customer service, which aligns perfectly with our team. Through this acquisition, we are confident that we’ll provide our customers with the best possible outcomes, and we couldn’t be more excited about it.”

Dany Olier, President & Cofounder of ICDTranslation, agrees. “The combination of our technologies, resources, and innovations will be a huge asset for everyone involved and allow us to strengthen and expand the impact of superior language services. We can’t wait to join the team at Argo Translation.”

Catherine Deschamps-Potter, Vice President & Co-founder of ICDTranslation, also weighed in on the acquisition. “It has been a pleasure working with so many great companies over the years. We know Argo Translation will take great care of our customers.”

Business will continue out of Argo Translation’s headquarters in Glenview, Illinois in the metro-Chicago area. ICDTranslation co-founders Dany Olier and Catherine Deschamps-Potter will consult during the transition. The entire ICD team will continue their work with Argo Translation.

ABOUT ARGO TRANSLATION

Founded in 1995 by Peter Argondizzo and Jackie Lucarelli, Argo Translation delivers cost-effective, culturally significant translation services by combining dedicated project management teams, powerful technology, and teams of linguists around the world. With over 300 million words and 80 languages translated, Argo Translation helps companies expand their audiences, increase engagement and revenue, and achieve organizational success through better understanding and certified quality translation. Learn more about how Argo Translation is breaking the language barrier at argotrans.com.

ABOUT ICD TRANSLATION

ICDTranslation, Inc. is a comprehensive translation agency providing superior multilingual communications to international industries since 1991. With locations in Milwaukee, Denver, and Tampa, ICDTranslation has helped strengthen its clients’ global presence through premier customer service, authentic customer partnerships, and a proud commitment to accuracy, confidentiality, and excellence.

###

Peter Argondizzo
Argo Translation, Inc.
+1 847-901-4070
marketing@argotrans.com
Visit us on social media:
Twitter
LinkedIn


Wordly Powering Live Translation for 10,000+ Participants at IMEX Frankfurt Trade Show - Yahoo Finance - Translation

Following 1 Million user milestone announcement, the leading provider of AI-powered interpretation delivers new enhancements making it easy and affordable for event planners to increase conference attendance, engagement, and inclusivity

FRANKFURT, Germany, May 30, 2022 /PRNewswire/ -- Wordly Inc., the leading SaaS provider of AI-powered simultaneous interpretation, today announced several platform enhancements which will make it easier and more affordable for event planners to increase conference attendance, engagement, and inclusivity. These new features coincide with the IMEX Frankfurt event, where Wordly is the official translation provider, powering real-time translation for 10,000+ participants. Attendees will experience a robust and unique translation service used by over 1 million attendees worldwide.

"To further inclusivity and knowledge sharing, the IMEX team chose Wordly to provide scalable AI-powered interpretation for 55 sessions in its four education theaters at IMEX in Frankfurt this year, making translated audio and transcription available instantly in over 20 languages," said Sylvia Taylor, Associate Director Knowledge & Events at the IMEX Group. "Wordly allows us to efficiently deliver inspiring educational content to our show attendees in more languages than ever before."

IMEX Translation
Conference participants can read live captions or listen to live audio for 55 educational sessions on 4 stages in the language of their choice. Attendees can access Wordly in just a few seconds via their mobile device by scanning a QR code or using a browser URL on their laptop.

Platform Enhancements
Wordly is delivering several enhancements to make managing large conferences and events easier. New features include:

  • Bulk Session Manager enables event managers to plan and schedule multiple sessions in an event management platform or a spreadsheet and import them into the Wordly Portal. This saves significant time and makes it easier to manage hundreds of sessions at once.

  • Portal and Attendee App Localization gives event planners and attendees the ability to navigate the Wordly product UX in the language of their choice.

  • Local Transcript Storage helps organizations store transcript files in a specified country to meet compliance and privacy requirements.

  • Transcript Translation provides organizations with the ability to quickly translate transcripts into 20+ languages and make the content available to a wider audience.

"We founded Wordly to increase inclusivity and engagement for all participants regardless of location or language. Our AI-powered interpretation solution makes it easy for any organization to be more inclusive with their events without the high cost of human interpreters," said Lakshman Rathnam, Founder and CEO of Wordly. Building off of the success of IMEX Americas, we are excited to be the official translation provider of IMEX Frankfurt, giving attendees the ability to engage in real-time so that they can get the most out of the conference regardless of their native language."

About Wordly
Wordly provides AI-powered multilingual collaboration solutions for attendees at in-person, virtual, and hybrid meetings and events. With over 1 million users, the Wordly platform provides remote, real-time, simultaneous translation without the use of human interpreters, making it faster, easier, and more affordable to collaborate across multiple languages at once. Wordly empowers organizations to unlock the potential of their multilingual teams and global markets by removing language barriers, increasing inclusivity, engagement, and productivity. Wordly is used by over 500 organizations for a wide range of use cases, including industry conferences, customer webinars, sales kickoff meetings, partner training, employee onboarding, and much more. For more information, visit www.wordly.ai.


View original content:https://ift.tt/vAlhsuV

SOURCE Wordly


What Is the Role of Translators and Translation Technology in Crisis Settings? - Slator - Translation

When a global crisis strikes, help must arrive fast — which usually demands coordination and information flow among people who may not speak the same language. In crisis settings, people need immediate access to crucial information.

Therefore, translators and interpreters can play a critical role in supporting the activities of responders involved in crisis communication scenarios. The Covid-19 crisis, for instance, has shown how essential it is for people to have access to information in a language they understand. As a result, a considerable body of research investigating translation as a crisis communication tool is currently emerging.

Crisis Translation

In her recent article, Crisis Translation: A snapshot in time, Sharon O’Brien, Associate Dean for Research at the Faculty of Humanities and Social Sciences in Dublin City University, defines crisis (or disaster) as “an unexpected event, with sudden or rapid onset that can seriously disrupt the routines of an individual or a collective and that poses some level of risk or danger.”

Together with Federico Federici, Professor of Intercultural Crisis Communication at University College London, they have started to examine the need for and use of translation and interpreting in crisis response, as well as the role of translation as a risk reduction tool in the disaster management cycle. That is, the ongoing process by which governments, businesses, and civil society plan for and reduce the impact of disasters, react immediately following and during a disaster, and take steps to recover in the aftermath.

Since crisis communication as a field is well established, it made sense to build on it to create the parallel term “crisis translation” (i.e., any form of linguistic and cultural transmission of messages that enables access to information during an emergency, regardless of medium).

The fundamental premise underlying the concept of crisis translation remains the same: In today’s age of globalization, increased urbanization, and migration, communication before, during, and after a crisis must be multilingual and multicultural. That communication is enabled through translation and interpreting.

Training for Citizen Translators

As O’Brien put it, “professional translators and interpreters are an asset in crisis communication.” But will there be an adequate supply of this asset during a crisis?

Translation and interpreting are not established equally around the world, and translators and interpreters may also be affected by a crisis and, thus, temporarily unable to provide their typical level of service. “When people are faced with a crisis, the luxury of a trained professional is often just that – an unattainable luxury,” observed O’Brien.

In a crisis situation, a translator might be “any person who can mediate between two or more language and culture systems, without specific training or qualifications,” according to Federici. Hence, volunteerism is viewed as a “legitimate way in which people can participate in the activities of their community” — and, as such, deserves recognition and respect.

Such volunteers, though, may not have any formal translation training. It is for this reason that a group has produced materials to help train “citizen translators.” The group is called INTERACT (International Network in Crisis Translation), a project funded by the European Union’s Horizon 2020 research and innovation program.

In addition, the INTERACT team co-developed a master’s level module for translation studies at the University of Auckland, University College London, and Dublin City University on the topic of crisis translation. The aim of these ongoing modules is to enable students of translation studies to develop a skill set in support of multilingual crisis settings.

Machine Translation in Crisis Response

Machine translation (MT) might be regarded as the most appropriate technology for crisis response given the speed of production it enables and its availability online in an expanding number of languages.

“When translation is required at speed, MT is, on the surface, the most logical tool,” O’Brien pointed out. The most recent evidence of the use of MT in crisis response was the rapid development of MT engines to help Ukrainian citizens.

The use of MT in crisis translation, however, has technical, operational, and ethical limits according to a 2020 study. Given that MT is not yet a perfect technology, its use for communication in crisis settings may be highly problematic. Getting the message wrong in crisis communication can have serious implications.

According to the same study, aside from the quality problem, some issues that need to be addressed are

  • the lack of big linguistic data to build translation engines
  • the lack of coverage for languages that may be required in crisis response
  • the lack of domain-specific engines that cover crisis content
  • the need for power and infrastructure to run the technology
  • the lack of linguistic expertise to edit output

The MT R&D community continues to tackle these challenges and is looking for ways to improve language coverage for low-resource languages, among other things.

Another significant hurdle: those involved in emergency response may be unaware of the pitfalls of MT technology. Using a free online tool may seem like an easy decision when saving lives is the priority and resources are strained. The need for training in basic MT literacy is clear.


Sunday, May 29, 2022

Wordly Powering Live Translation for 10,000+ Participants at IMEX Frankfurt Trade Show - PR Newswire - Translation

Following 1 Million user milestone announcement, the leading provider of AI-powered interpretation delivers new enhancements making it easy and affordable for event planners to increase conference attendance, engagement, and inclusivity

FRANKFURT, Germany, May 30, 2022 /PRNewswire/ -- Wordly Inc., the leading SaaS provider of AI-powered simultaneous interpretation, today announced several platform enhancements which will make it easier and more affordable for event planners to increase conference attendance, engagement, and inclusivity. These new features coincide with the IMEX Frankfurt event, where Wordly is the official translation provider, powering real-time translation for 10,000+ participants. Attendees will experience a robust and unique translation service used by over 1 million attendees worldwide.

"To further inclusivity and knowledge sharing, the IMEX team chose Wordly to provide scalable AI-powered interpretation for 55 sessions in its four education theaters at IMEX in Frankfurt this year, making translated audio and transcription available instantly in over 20 languages," said Sylvia Taylor, Associate Director Knowledge & Events at the IMEX Group. "Wordly allows us to efficiently deliver inspiring educational content to our show attendees in more languages than ever before."

IMEX Translation
Conference participants can read live captions or listen to live audio for 55 educational sessions on 4 stages in the language of their choice. Attendees can access Wordly in just a few seconds via their mobile device by scanning a QR code or using a browser URL on their laptop.

Platform Enhancements
Wordly is delivering several enhancements to make managing large conferences and events easier. New features include:

  • Bulk Session Manager enables event managers to plan and schedule multiple sessions in an event management platform or a spreadsheet and import them into the Wordly Portal. This saves significant time and makes it easier to manage hundreds of sessions at once.
  • Portal and Attendee App Localization gives event planners and attendees the ability to navigate the Wordly product UX in the language of their choice.
  • Local Transcript Storage helps organizations store transcript files in a specified country to meet compliance and privacy requirements.
  • Transcript Translation provides organizations with the ability to quickly translate transcripts into 20+ languages and make the content available to a wider audience.

"We founded Wordly to increase inclusivity and engagement for all participants regardless of location or language. Our AI-powered interpretation solution makes it easy for any organization to be more inclusive with their events without the high cost of human interpreters," said Lakshman Rathnam, Founder and CEO of Wordly. Building off of the success of IMEX Americas, we are excited to be the official translation provider of IMEX Frankfurt, giving attendees the ability to engage in real-time so that they can get the most out of the conference regardless of their native language."

About Wordly
Wordly provides AI-powered multilingual collaboration solutions for attendees at in-person, virtual, and hybrid meetings and events. With over 1 million users, the Wordly platform provides remote, real-time, simultaneous translation without the use of human interpreters, making it faster, easier, and more affordable to collaborate across multiple languages at once. Wordly empowers organizations to unlock the potential of their multilingual teams and global markets by removing language barriers, increasing inclusivity, engagement, and productivity. Wordly is used by over 500 organizations for a wide range of use cases, including industry conferences, customer webinars, sales kickoff meetings, partner training, employee onboarding, and much more. For more information, visit www.wordly.ai.

SOURCE Wordly


Saturday, May 28, 2022

Dictionary donation - Northside Sun - Dictionary

The Rotary Club of North Jackson recently delivered dictionaries to the Mississippi Children’s Museum for their education program. Shown are (from left) Monique Ealey, director of education and programs; Cynthia Till, assistant director of guest experiences; Greg Campbell, Rotary Club of North Jackson past president and project chair; and Lindsey Harris, director of development.


Southold finds new solution to translation issues at Town Hall - The Suffolk Times - Suffolk Times - Translation

Following a recent request for better translation services at Southold Town Hall, Town Supervisor Scott Russell announced Tuesday that the town has partnered with a company that will provide language assistance in all town departments via telephone.

LanguageLine Solutions, based in Monterey, Calif., offers translation of over 240 languages. 

Mr. Russell worked with Justice Court director Leanne Reilly on establishing a subscription for the town to use. Training has already been conducted for town department heads.

The town will soon post signs written in some of the more commonly used languages at Town Hall that provide instructions for non-English speakers to call the service for help with translation.

“Once we get the signs up, someone who speaks a certain language will be able to read the sign and know what they would need to do to communicate with the town employee,” Mr. Russell said.


Friday, May 27, 2022

Translation, the new (benign) colonialism - Economic Times - Translation

Apart from being a language, English is also the most politically empowered language. The latest demonstration of this fact is the far wider recognition Geetanjali Shree has gained as a writer after being awarded this year's International Booker Prize. This prize, a companion award to the more popular Booker Prize given to works of fiction written only in English, is given to works in non-English languages translated into English. What makes it valuable, apart from the reach the host language (English) brings to the work written in the guest language (in this case, Hindi), is the push (read: branding) the prize itself bestows. Remember, the translation has to be published in Britain or Ireland to qualify. Shree's 2018 novel Ret Samadhi, translated into English by (American) Daisy Rockwell and published in 2021 as Tomb of Sand, brings the book - with the stamp of 'Booker aesthetics' excellence - to a non-Hindi-reading readership that includes Indian readers.

For English to wield the kind of 'visa-less' access it has today, it required a consistent overt and covert political push over centuries. Britain's colonial enterprise, of course, was a primal source and fuel of that longue duree proliferation. By the time the language of the British Empire was unhitched from the empire itself, it had welded itself across the world both as lingua franca and high language of choice, with its regional manifestations (British English, American English, Indian English, etc). A language's reach is determined by its source users' economic and political heft. So, for a language to thrive, both in stature and reach, beyond its native terrain, the source society must have the commensurate heft.

With colonisation, thankfully, no longer an option, it is translation that now holds the key. Translating quality contemporary literature in Indian languages into other Indian languages, English, Mandarin, Spanish, etc - languages whose readers are appreciative of literature - can bring the soft power and prestige that native language speakers crave.



Dharamsala: Tibetan dictionary released - The Tribune India - Dictionary

[unable to retrieve full-text content]

Dharamsala: Tibetan dictionary released  The Tribune India

Taltan Dictionary Project aims to preserve endangered Indigenous language - CFTKTV - Dictionary

[unable to retrieve full-text content]

Taltan Dictionary Project aims to preserve endangered Indigenous language  CFTKTV

Thursday, May 26, 2022

Translate scanned PDF documents with Document translation - Microsoft - Translation

Today, the Document translation feature of Translator, a Microsoft Azure Cognitive Service, adds the ability to translate PDF documents containing scanned image content, eliminating the need for customers to preprocess them through an OCR engine before translation.

Document translation was made generally available last year, May 25, 2021, allowing customers to translate entire documents and batches of documents into more than 110 languages and dialects while preserving the layout and formatting of the original file. Document translation supports a variety of file types, including Word, PowerPoint and PDF, and customers can use either pre-built or custom machine translation models. Document translation is enterprise-ready with Azure Active Directory authentication, providing secured access between the service and storage through Managed Identity.

Translating PDFs with scanned image content is a highly requested feature from Document translation customers. Customers find it difficult to automatically separate PDF documents containing regular text from those containing scanned images. This creates workflow issues, as customers have to route PDF documents with scanned image content to an OCR engine before sending them to Document translation.

Document translation services now have the intelligence

  • to identify whether the PDF document contains scanned image content or not,
  • to route PDFs containing scanned image content to an OCR engine internally to extract text,
  • to reconstruct the translated content as regular text PDF while retaining the original layout and structure.

Font formatting such as bold, italics, underline, and highlights is not retained for scanned PDF content, as OCR technology does not currently capture it. However, font formatting is preserved when translating regular text PDF documents.

Document translation currently supports PDF documents containing scanned image content from 68 source languages into 87 target languages. Support for additional source and target languages will be added in due course.

Now it’s easier for customers to send all PDF documents to Document translation directly and let it decide when and how to use the OCR engine efficiently.

For customers already using Document translation, no code change is required to be able to use this new feature. PDF documents with scanned content can be submitted for translation like any other supported document formats.
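
For illustration, here is a minimal sketch of what such a submission could look like, assuming the azure-ai-translation-document Python SDK; the endpoint, key, container SAS URLs, and target language are placeholders, and exact parameter names may differ by SDK version.

```python
# Minimal sketch of submitting a batch of documents (including scanned PDFs)
# for translation, assuming the azure-ai-translation-document Python SDK.
# Endpoint, key, and the blob-container SAS URLs below are placeholders; per
# the announcement, scanned PDFs need no special handling in the request.
from azure.core.credentials import AzureKeyCredential
from azure.ai.translation.document import DocumentTranslationClient

client = DocumentTranslationClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

# The source container holds the originals (Word, PowerPoint, PDF, scanned
# PDF); the target container receives translated output, layout preserved.
poller = client.begin_translation(
    "<source-container-sas-url>",
    "<target-container-sas-url>",
    "de",  # target language code
)

for doc in poller.result():
    print(doc.status, doc.translated_document_url)
```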

We are also pleased to announce that Document translation adds support for scanned PDF document content at no additional charge to customers. Two pricing plans are available for Document translation through Azure: the Pay-as-you-go plan and the D3 volume discount plan for higher volumes of document translation. Pricing details can be found at aka.ms/TranslatorPricing.

Learn how to get started with Document translation at aka.ms/DocumentTranslationDocs.
Send your feedback to mtfb@microsoft.com.


AppTek Achieves Top Ranking at the International Workshop in Spoken Language Translation's (IWSLT) 2022 Evaluation Campaign - PR Newswire - Translation

Company's Spoken Language Translation System Ranks First in Isometric Speech Translation Track Which Is Critical in Improving Automatic Dubbing and Subtitling Workflows

MCLEAN, Va., May 26, 2022 /PRNewswire/ -- AppTek, a leader in Artificial Intelligence (AI), Machine Learning (ML), Automatic Speech Recognition (ASR), Neural Machine Translation (NMT), Text-to-Speech (TTS) and Natural Language Processing / Understanding (NLP/U) technologies, announced that its spoken language translation (SLT) system ranked first in the isometric speech translation track at the 19th annual International Workshop on Spoken Language Translation's (IWSLT 2022) evaluation campaign.

Isometric translation is a new research area in machine translation that concerns the task of generating translations similar in length to the source input and is particularly relevant to downstream applications such as subtitling and automatic dubbing, as well as the translation of some forms of text that require constraints in terms of length such as in software and gaming applications.   

"We are thrilled with the results of the track," said Evgeny Matusov, Lead Science Architect, Machine Translation, at AppTek. "This is a testament to the hard work and skill of our team, who have been focusing on developing customized solutions for the media and entertainment vertical."

AppTek entered the competition to measure the performance of its isometric SLT system against other leading platforms developed by corporate and academic science teams around the world. Participants were asked to create translations of YouTube video transcriptions such that the length of the translation stays within 10% of the length of the original transcription, measured in characters. AppTek participated in the constrained task for the English-German language pair, which, of the three pairs evaluated at IWSLT, has the highest target-to-source length ratio in terms of character count.
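
The length constraint is straightforward to express as a per-segment check. The sketch below is an assumed illustration of how compliance could be measured (the sentence pair is invented); it is not AppTek's or the task organizers' actual scoring code.

```python
# Minimal sketch of the isometric constraint described above: the translation's
# character count must stay within 10% of the source's character count.
# The example sentence pair is invented, not IWSLT evaluation data.
def length_compliant(source: str, translation: str, tolerance: float = 0.10) -> bool:
    """Return True if the translation's length is within `tolerance`
    of the source's length, measured in characters."""
    ratio = len(translation) / len(source)
    return (1 - tolerance) <= ratio <= (1 + tolerance)

src = "Welcome back to the channel, today we look at three quick tips."
hyp = "Willkommen zurück im Kanal, heute zeigen wir drei schnelle Tipps."
print(length_compliant(src, hyp))  # within roughly 3% in characters, so True
```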

Submissions were evaluated on two dimensions – translation quality and length compliance with respect to source input. Both automatic and human assessment found the AppTek translations to outperform competing submissions in terms of quality and the desired length, with performance matching "unconstrained" systems trained on significantly more data. An additional evaluation performed by the task organizers showed that creating synthetic speech from AppTek's system output leads to automatically dubbed videos with a smooth speaking rate and of higher perceived quality than when using the competing systems.

"The superior performance of AppTek's isometric speech translation system is another step towards delivering the next generation of speech-enabled technologies for the broadcast and media markets", said Kyle Maddock, AppTek's SVP of Marketing. "We are committed to delivering the state-of-the-art for demanding markets such as media and entertainment, and isometric translation is a key component for more accurate automatic subtitling and dubbing workflows."

AppTek scientists Patrick Wilken and Evgeny Matusov will present the details of AppTek's submission at this year's IWSLT conference held in Dublin on May 26-27, 2022.

The full IWSLT 2022 results can be found here.

About AppTek
AppTek is a global leader in artificial intelligence (AI) and machine learning (ML) technologies for automatic speech recognition (ASR), neural machine translation (NMT), natural language processing/understanding (NLP/U) and text-to-speech (TTS) technologies.  The AppTek platform delivers industry-leading, real-time streaming and batch technology solutions in the cloud or on-premises for organizations across a breadth of global markets such as media and entertainment, call centers, government, enterprise business, and more. Built by scientists and research engineers who are recognized among the best in the world, AppTek's multidimensional 4D for HLT (human language technology) solutions with slice and dice methodology covering hundreds of languages/dialects, domains, channels and demographics drive high impact results with speed and precision.  For more information, please visit http://www.apptek.com.

Media Contact:
Kyle Maddock
202-413-8654
[email protected]

SOURCE AppTek

Wednesday, May 25, 2022

Lewis County Elks Dictionary Project - Lewis Herald - Dictionary

[unable to retrieve full-text content]

Lewis County Elks Dictionary Project  Lewis Herald

Translate scanned PDF documents with Document translation - Microsoft - Translation

Today, the Document translation feature of Translator, a Microsoft Azure Cognitive Service, adds the ability to translate PDF documents containing scanned image content, eliminating the need for customers to preprocess them through an OCR engine before translation.

Document translation was made generally available last year, on May 25, 2021, allowing customers to translate entire documents and batches of documents into more than 110 languages and dialects while preserving the layout and formatting of the original file. Document translation supports a variety of file types, including Word, PowerPoint and PDF, and customers can use either pre-built or custom machine translation models. Document translation is enterprise-ready with Azure Active Directory authentication, providing secure access between the service and storage through Managed Identity.

Translating PDFs with scanned image content is a highly requested feature from Document translation customers. Customers find it difficult to automatically distinguish PDF documents containing regular text from those containing scanned image content. This creates workflow issues, as customers have to route PDF documents with scanned image content to an OCR engine before sending them to Document translation.

The Document translation service now has the intelligence (a conceptual sketch of the detection step follows the list below):

  • to identify whether a PDF document contains scanned image content,
  • to route PDFs containing scanned image content to an OCR engine internally to extract text, and
  • to reconstruct the translated content as a regular text PDF while retaining the original layout and structure.
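
As a conceptual sketch of the detection step only, not Microsoft's implementation, a PDF with no extractable text layer can be flagged for OCR, shown here with the open-source pypdf package.

    # Conceptual sketch, not Microsoft's implementation: a PDF with no
    # extractable text layer is assumed to be scanned and routed to OCR.
    from pypdf import PdfReader

    def needs_ocr(path: str) -> bool:
        reader = PdfReader(path)
        return not any((page.extract_text() or "").strip() for page in reader.pages)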

Font formatting such as bold, italics, underline, and highlights is not retained for scanned PDF content, as OCR technology does not currently capture it. However, font formatting is preserved when translating regular text PDF documents.

Document translation currently supports PDF documents containing scanned image content from 68 source languages into 87 target languages. Support for additional source and target languages will be added in due course.

Now it’s easier for customers to send all PDF documents to Document translation directly and let it decide when and how to use the OCR engine efficiently.

For customers already using Document translation, no code change is required to use this new feature. PDF documents with scanned content can be submitted for translation like any other supported document format.
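
For illustration, here is a minimal sketch of submitting a batch job with the azure-ai-translation-document Python client library; the endpoint, key, and container SAS URLs are placeholders, and scanned and regular PDFs in the source container are handled the same way.

    # Minimal sketch using the azure-ai-translation-document package.
    # The endpoint, key, and the two container SAS URLs are placeholders.
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.translation.document import DocumentTranslationClient

    client = DocumentTranslationClient(
        "https://<your-resource>.cognitiveservices.azure.com/",
        AzureKeyCredential("<your-key>"),
    )

    # Translate everything in the source container, scanned or regular PDFs
    # alike, into German and write the results to the target container.
    poller = client.begin_translation(
        "<source-container-sas-url>",
        "<target-container-sas-url>",
        "de",
    )
    for doc in poller.result():
        print(doc.id, doc.status, doc.translated_document_url)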

We are also pleased to announce that Document translation adds support for scanned PDF content at no additional charge to customers. Two pricing plans are available for Document translation through Azure: the Pay-as-you-go plan and the D3 volume discount plan for higher volumes of document translation. Pricing details can be found at aka.ms/TranslatorPricing.

Learn how to get started with Document translation at aka.ms/DocumentTranslationDocs.
Send your feedback to mtfb@microsoft.com.

Tuesday, May 24, 2022

Research translation, innovation updates top BOT's May meeting - The Well : The Well - The Well - Translation

News about University research — how it’s being done and how it’s applied to solve problems — dominated the Board of Trustees meeting May 18-19.

The Office of Undergraduate Research showed how important undergraduates are in making new discoveries in its May 19 presentation. Gabriella Hesse ’22, now a School of Medicine student, presented her research about sex-related tendencies in the development of certain brain diseases. Lauren McShea, a rising sophomore majoring in environmental health sciences, showed how she helped develop low-cost ways to monitor well water for harmful bacteria.

Because so many Carolina faculty are involved in research, “our undergraduates get to be in proximity to that, to take part in that,” said Troy Blackburn, associate dean for undergraduate research. “It’s learning by doing. It’s taking classroom knowledge and using it to solve problems.”

With the full implementation of the new IDEAs in Action curriculum this fall, about 19,000 undergraduates will be required to engage in original research to meet the new research and discovery requirement, Blackburn said.

Student Gabriella Hesse, left, and teaching associate professor Sabrina Robertson share research done on Parkinson’s disease. (Jon Gardiner/UNC-Chapel Hill)

Next steps in research

Provost and Chief Academic Officer J. Christopher Clemens spoke about helping researchers develop their work when asking that the Institute for Convergent Science, based in the College of Arts & Sciences since 2017, become a pan-University, interdisciplinary center.

“Our faculty are very good at sponsored research,” Clemens, the institute’s founding director, told the University Affairs committee. But basic research skills are very different from those required for building a company. “We see ICS as a bridge that helps faculty navigate the pathways they must go through if they’re going to take research from the lab and out into the world.”

The institute, located in the Genome Sciences Building, operates in a three-lane research-to-market process called “Ready, Set, Go.” The middle lane is the newest to the University. “It’s called pre-commercial development,” Clemens said. “It awards money based on proposals,” allowing researchers to continue to develop their ideas without having to take entrepreneurial risks.

“It needed to be elevated. It will help us recruit faculty who will grow the research infrastructure and support other initiatives,” said Chancellor Kevin M. Guskiewicz.

The board approved the institute, which will be led by Gregory P. Copenhaver, Chancellor’s Eminent Professor of Convergent Science and associate dean of research and innovation in the College of Arts & Sciences.

Trustee Vinay B. Patel called the proposed downtown Innovation District “very exciting news, not just for the University but for the entire region.” (Jon Gardiner/UNC-Chapel Hill)

On to innovation and commercialization

Researchers who are ready to be entrepreneurs can call upon the many resources of Innovate Carolina.

“The gap between research and discovery and impact is wide, long, resource-intensive and risky. And this is the place where Innovate Carolina sits,” Michelle Bolas, the University’s chief innovation officer and executive director of Innovate Carolina, told the Strategic Initiatives committee.

One of the department’s recent successes is approval of the 20,000-square-foot Innovation Hub at 136 E. Rosemary St. The downtown Chapel Hill space is being renovated as a new home for Innovate Carolina and a startup accelerator, with co-working and meeting spaces.

The Innovation Hub, scheduled to open in April 2023, and the redevelopment of Porthole Alley will be key components of a proposed downtown Innovation District. “We will be one of the only leading universities with an Innovation District at the edge of our campus, on our front door,” Bolas said.

Trustee Vinay B. Patel called the development “very exciting news, not just for the University but for the entire region,” when he presented an update to the full board.

Board of Trustees Chair David L. Boliek Jr. responded that the new district shows “this board’s commitment to economic development and the vibrancy of Chapel Hill and the 100 block of Franklin Street.”

Guskiewicz announced another tangible result of innovative research attracting funding in his meeting remarks — a $65 million award from the National Institute of Allergy and Infectious Diseases to the UNC Gillings School of Global Public Health. The grant will establish the Antiviral Drug Discovery Center to develop oral antivirals that can combat pandemic-level viruses like COVID-19. The center builds upon UNC’s Rapidly Emerging Antiviral Drug Development Initiative.

Incoming chief of UNC Police Brian James addresses the board during the UNC Board of Trustees full board meeting May 19. (Jon Gardiner/UNC-Chapel Hill)

A new slate of campus leaders

Guskiewicz introduced trustees to four recently hired members of his leadership team:

  • Janet Guthmiller, new dean of Adams School of Dentistry and Claude A. Adams Distinguished Professor, effective Oct. 15.
  • James W.C. White, new dean of the College of Arts & Sciences, effective July 1.
  • Brian James, new chief of UNC Police, effective July 1.
  • Amy McConkey, new director of state affairs.

Not in attendance but also mentioned in the chancellor’s remarks was Valerie Howard, new dean of School of Nursing, effective Aug. 1.

Another new leader at the May meeting was Student Body President Taliajah Vann, who took the oath of office to become an ex officio member of the board for the next year. “I am excited to work within this space,” Vann said.

In addition to the Institute for Convergent Science, the trustees voted to approve:

Trustees also received the following reports:

  • A budget update and a plan to implement OneStream software as the new campus-wide budget tool, from Nathan Knuffman, vice chancellor for finance and operations.
  • An overview of the Office of Institutional Integrity and Risk Management, from George Battle, vice chancellor for institutional integrity and risk management.
  • Remarks from Katie Musgrove, Employee Forum chair, who reminded trustees that staff are struggling because of the Great Resignation and a “plague of lingering vacancies” that have left them “overtasked and burned out.”

Monday, May 23, 2022

Meta Tries Making Human Evaluation of Machine Translation More Consistent - Slator - Translation

Although automatic evaluation metrics, such as BLEU, have been widely used in industry and academia to evaluate machine translation (MT) systems, human evaluators are still considered the gold standard in quality assessment.
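
For reference, BLEU scores system output against reference translations; a small sketch with the sacrebleu Python package follows, with made-up sentences for illustration.

    # Illustrative only: scoring two hypothetical MT outputs against
    # one reference each with sacreBLEU.
    import sacrebleu

    hypotheses = ["The cat sits on the mat.", "He went to the store yesterday."]
    references = [["The cat is sitting on the mat.", "He went to the shop yesterday."]]

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.1f}")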

Human evaluators apply quite different criteria when evaluating MT output. These are shaped by their linguistic skills, their translation-quality expectations, their exposure to MT output, the presentation of the source or a reference translation, and unclear descriptions of the evaluation categories, among other factors.

“This is especially [problematic] when the goal is to obtain meaningful scores across language pairs,” according to a recent study by a multidisciplinary team from Meta AI that includes Daniel Licht, Cynthia Gao, Janice Lam, Francisco Guzman, Mona Dia, and Philipp Koehn.

To address this challenge, the authors proposed in their May 2022 paper, Consistent Human Evaluation of Machine Translation across Language Pairs, a novel metric. Called XSTS, it is more focused on meaning (semantic) equivalence and cross-lingual calibration, which enables more consistent assessment.

Adequacy Over Fluency

XSTS — a cross-lingual variant of STS (Semantic Textual Similarity) — estimates the degree of similarity in meaning between source sentence and MT output. The researchers used a five-point scale, where 1 represents no semantic equivalence and 5 represents exact semantic equivalence.

The new metric emphasizes adequacy rather than fluency, mainly because assessing fluency is much more subjective. The study noted that this subjectivity leads to higher variability and that preserving meaning is a pressing challenge in many low-resource language pairs.

The authors compared XSTS to Direct Assessment (i.e., the expression of a judgment on the quality of MT output using a continuous rating scale) as well as some variants of XSTS, such as Monolingual Semantic Textual Similarity (MSTS), Back-translated Monolingual Semantic Textual Similarity (BT+MSTS), and Post-Editing with critical errors (PE).

They found that “XSTS yields higher inter-annotator agreement compared [to] the more commonly used Direct Assessment.”

Cross-Lingual Consistency

“Even after providing evaluators with instruction and training, they still show a large degree of variance in how they apply scores to actual examples of machine translation output,” wrote the authors. “This is especially the case, when different language pairs are evaluated, which necessarily requires different evaluators assessing different output.”

To address this issue, the authors proposed using a calibration set that is common across all languages and consists of MT output and corresponding reference translation. The sentence pairs of the calibration set should be carefully selected to cover a wide quality range, based on consistent assessments from previous evaluations. These scores can then be used as the “consensus quality score.”

Evaluators should assess this fixed calibration set in addition to the actual evaluation task. Then the average score each evaluator gives to the calibration set should be calculated.

According to the authors, “The goal of calibration is to adjust raw human evaluation scores so that they reflect meaningful assessment of the quality of the machine translation system for a given language pair.”

Given that the calibration set is fixed, its quality is fixed, so the average score each evaluator assigns to the set should be the same. Hence, the difference between the score each evaluator actually assigns and the official consensus score can be used to adjust that evaluator’s scores.

“If this evaluator-specific calibration score is too high, then we conclude that the evaluator is generally too lenient and their scores for the actual task need to be adjusted downward, and vice versa,” explained the authors.

For example, if the consensus quality score for the calibration set is 3.0 but an evaluator assigned it a score of 3.2, then 0.2 should be deducted from all of their scores for the actual evaluation task.
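
A minimal sketch of that additive adjustment is below; the function and score values are ours, apart from the 3.0/3.2 example just described.

    from statistics import mean

    def calibrate_scores(task_scores, calibration_scores, consensus_score):
        # Shift an evaluator's task scores by the gap between their mean
        # score on the fixed calibration set and the consensus score.
        offset = mean(calibration_scores) - consensus_score
        return [score - offset for score in task_scores]

    # The evaluator averaged 3.2 on a calibration set whose consensus score
    # is 3.0, so 0.2 is deducted from each of their task scores.
    print(calibrate_scores([4.0, 2.5, 5.0], [3.5, 3.1, 3.0], consensus_score=3.0))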

The authors concluded that the calibration leads to improved correlation of system scores to subjective expectations of quality based on linguistic and resource aspects, as well as to improved correlation with automatic scores, such as BLEU.

Sunday, May 22, 2022

Translation efforts of Jehovah's Witnesses reach Alabama residents in the language of their Hearts - Elmore Autauga News - Translation

From Jehovah’s Witnesses of the United States of America Organization

‘Straight to the Heart’: Unprecedented Translation Work brings Words to Life for Millions

Jin Gim (pronounced Kim) joined his family in the United States when he was 28 years old. Montgomery resident Gim was born and raised in Seoul, South Korea and did not speak English. However, prior to his arrival, he had been baptized as one of Jehovah’s Witnesses. The organization first began publishing the Watchtower and Awake! magazines in the Korean language in 1952. Today, in the Republic of Korea there are 1,270 congregations. However, in Alabama, there is only one.

Says Gim, “I knew very few words of English when I first came to the U.S., and it’s still hard for me. It’s really awesome and fantastic that I can attend meetings and study Christian publications in my own language.”

Why do Jehovah’s Witnesses put so much effort into translation, including for some smaller language groups?

“We understand that a region’s official language may not be the language of a person’s heart,” said Robert Hendriks, the U.S. spokesperson for Jehovah’s Witnesses.

In the United States alone, some 67 million residents speak a language other than English at home.

According to UNESCO, education based on the language one speaks at home results in better quality learning, fosters respect and helps preserve cultural and traditional heritage. “The inclusion of languages in the digital world and the creation of inclusive learning content is vital,” according to its website.

That’s true for all ages and for all types of education.

“Translating spiritually uplifting material into over 1,000 languages takes a considerable amount of time and resources,” said Hendriks, “but we know that reaching a person’s heart with the Bible’s comforting message can only be accomplished if they fully understand it.”

Gim and others in the Korean-language congregation regularly reach out to their Korean-speaking neighbors in Montgomery. Although the door-to-door work of Jehovah’s Witnesses has been suspended since March 2020, they continue their ministry by writing letters and making telephone calls. Before the pandemic, Gim recalls engaging in the door-to-door work and one resident, who had recently arrived from Korea, remarked, “Oh, my! The Witnesses are here, too!”

Before moving to Alabama, Gim had also assisted Korean-speaking Witnesses in California, North Carolina, and Georgia.

“No matter where I live, my fellow Christian friends are always there to help me,” says Gim. “My family and I have always been welcomed and we know that as Jehovah’s Witnesses, we are all one.”

To learn more about the translation work of Jehovah’s Witnesses, visit https://ift.tt/lHR8MSK.
