Thursday, May 4, 2023

Welsh language oath asks people to curse at God - BBC - Translation

Prince Charles at the state opening of parliament in 2022 (Getty Images)

A citizenship oath written in Welsh has been criticised for an error asking people to literally curse at God.

Wales' education minister Jeremy Miles says he will write to the UK government to remind them to use the language accurately.

It comes after an emergency mobile text alert in Welsh used a made-up word.

The Home Office, which produced the oath of allegiance, said it recognises the "importance" of correct translation.

People taking part in a citizenship ceremony have to give either the oath or the affirmation of allegiance to the King, as well as a pledge of loyalty to the UK.

A Home Office website, which is still live and has not been corrected, gives the English version of the oath of allegiance as: "I, (name), swear by Almighty God that, on becoming a British citizen, I will be faithful and bear true allegiance to His Majesty King Charles III, His Heirs and Successors, according to law."

But in Welsh it uses the term "rhegi", which means to curse. The Welsh for a vow is "tyngu".

It also uses the word "omnipotent" - which is not a Welsh word - as a translation of "almighty", instead of the Welsh "hollalluog".


The alternative affirmation of allegiance in Welsh includes mutations that should only be used for a female monarch.

Mutations are systematic changes to the beginnings of Welsh words, depending on grammatical context.

There is also a grammatical error in translating "freedoms" in the Welsh pledge.

Last month, a translation blunder that saw a Slovenian ski resort mentioned in the Welsh version of the emergency alert test was blamed on autocorrect.

For the Welsh for "others safe", the test message read "eraill yn Vogel" instead of "eraill yn ddiogel".

'Status of the Welsh language'

Earlier this week Plaid Cymru MS Llyr Gruffydd told the Senedd that while mistranslations or spelling errors can be amusing at first sight, "they do send a very unfortunate message in terms of the status of the Welsh language, when we see these examples being tolerated far too often".

He called on the education minister to write to public sector bodies in Wales, and to the UK government, "to encourage them and to remind them of their responsibilities in this regard".

Mr Miles replied that he is "very happy to do that", adding "if there was less emphasis on complaining about renaming Bannau Brycheiniog, and more emphasis on accuracy, we might all be happier".

Rishi Sunak said last week he will keep using the English name Brecon Beacons, which the national park has dropped.


Mr Miles said that the Welsh government facilitates Welsh language accuracy "through funding resources such as Welsh language spell-checkers and grammar checkers".

The Home Office said: "We recognise that a correct translation of the citizenship oath and pledge on gov.uk is important to reflect the significance of becoming a British citizen."

Manon Cadwaladr, chair of Cymdeithas Cyfieithwyr Cymru, the Association of Welsh Translators, said: "More and more people are using translation machines and we recognise that the quality of those machines is gradually improving.

"Nevertheless, good and accurate translation is specialist work. It is a craft. It requires specific skills, as well as experience".

She added that "to translate correctly into Welsh requires a real understanding of our language, our culture and the audience".

Related Topics

  • Home Office
  • Welsh language
  • Welsh government
  • King Charles III


Assam bilingual Braille dictionary is world’s largest lexicon - The Hindu - Dictionary

GUWAHATI

A 21-volume Braille version of an Assamese-English dictionary, weighing more than 80 kg, has become the world’s largest lexicon.

Guinness World Records officials on May 1 handed over a certificate to Jayanta Baruah, the publisher of the Braille version of Hemkosh, acknowledging its inclusion in the record book as the world’s largest bilingual Braille dictionary.

The 123-year-old Hemkosh is the first etymological dictionary of the Assamese language based on Sanskrit spellings, compiled by Mr. Baruah’s grandfather, Hemchandra Baruah.

The 10,279-page Braille version of the Hemkosh has 90,640 words printed in 21 volumes and weighs 80.800 kg. A copy of the dictionary, released in September 2022, was presented to Prime Minister Narendra Modi in New Delhi.

“A world record is always special. But what matters most is illuminating the lives of the visually impaired with the power of words,” Mr. Baruah said while accepting the certificate along with Assam Governor Gulab Chand Kataria.


Wednesday, May 3, 2023

AI translation puts asylum seekers in jeopardy - Boing Boing - Translation

Maybe you've used a translation program powered by neural networks or artificial intelligence and ran into an embarrassing or amusing error.

Imagine how much more likely those kinds of errors could be if the language you were translating from or into was relatively obscure, with its speakers neither numerous nor economically powerful.

And now imagine that every nuance of that translation affected the health and well-being of you and your family.

Afghan refugees applying for U.S. asylum are finding that just these types of errors can lead to rejected applications and dire consequences.

Human translators are expensive, but the cheaper alternative of AI translators can lead to disastrous consequences, according to an article in Rest of World.

"In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

"A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee's asylum bid because her written application didn't match the story told in the initial interviews.

"In the interviews, the refugee had first maintained that she'd made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

"After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the 'I' pronouns in the woman's statement to 'we.'"

Damian Harris-Hernandez, co-founder of the Refugee Translation Project, told Rest of World: "[Machine translation] doesn't have a cultural awareness. Especially if you're doing things like a personal statement that's handwritten by someone. … The person might not be perfect at writing, and also might use metaphors, might use idioms, turns of phrases that if you take literally, don't make any sense at all."

Despite these dangerous flaws, translation companies are marketing their services to U.S. government agencies and to refugee organizations. 

At least one AI developer has recognized the risks.

"OpenAI, the company that makes ChatGPT, updated its user policies in late March with rules that prohibit the use of the AI chatbot in 'high-risk government decision-making,' including work related to both migration and asylum."


Israeli experts create AI to translate ancient cuneiform text - study - The Jerusalem Post - Translation

Experts in Assyriology – who specialize in the archaeological, historical, cultural and linguistic study of Assyria and the rest of ancient Mesopotamia (modern-day Iraq) – spend many years painstakingly trying to understand Akkadian texts written in cuneiform, one of the oldest known forms of writing.

The name cuneiform means “wedge-shaped” because in ancient times, people wrote it using a reed stylus cut to make a wedge-shaped mark on a clay tablet.

But now, researchers at Tel Aviv University (TAU) and Ariel University have developed an artificial intelligence model that will save all this effort. The AI model can automatically translate Akkadian text written in cuneiform into English.

Who were the ancient Assyrians?

In 721 BCE, Assyria swept out of the North, captured the Northern Kingdom of Israel and took the Ten Tribes into captivity, after which they became lost to history. Assyria, named for the god Ashur (highest in the pantheon of Assyrian gods), was located in the Mesopotamian plain. Historians note that Assyrian Jews first appeared in that region when the Israelites were exiled there, and they lived continuously alongside the Assyrian people in the territories after the Assyrian exile.

Hundreds of thousands of clay tablets from ancient Mesopotamia, written in cuneiform and dating back as far as 3,400 BCE, have been found by archeologists – far more than could easily be translated by the limited number of experts who can read them.

Siege scene with two massive L-shaped shields protecting Assyrian soldiers, in a relief from the palace of Tiglath-Pileser III at Nimrud (credit: Courtesy of the British Museum)

Dr. Shai Gordin of Ariel University and Dr. Gai Gutherz, Dr. Jonathan Berant and Dr. Omer Levy of TAU and colleagues have just published their findings in the journal PNAS Nexus under the title “Translating Akkadian to English with neural machine translation.”

When they developed the new machine-learning model, they trained two versions – one that translates the Akkadian from representations of the cuneiform signs in Latin script and another that translates from unicode representations of the cuneiform signs. The first version, using Latin transliteration, gave more satisfactory results in this study, achieving a score of 37.47 on the Bilingual Evaluation Understudy 4 (BLEU4) metric, which measures the level of correspondence between a machine translation and a human translation of the same text.
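For readers unfamiliar with the metric, BLEU compares overlapping n-grams (up to length 4) between a machine translation and a human reference, with a penalty for translations that are too short; scores are conventionally reported on a 0–100 scale, as here. Below is a minimal, simplified sketch of the idea – single reference, no smoothing – and not the study's actual evaluation tooling, which the article does not specify:

```python
from collections import Counter
import math

def bleu4(candidate: str, reference: str) -> float:
    """Simplified sentence-level BLEU-4: geometric mean of modified
    1- to 4-gram precisions, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped counts
        precisions.append(overlap / max(sum(cand_ngrams.values()), 1))
    if min(precisions) == 0:  # any zero n-gram precision -> score of 0
        return 0.0
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

On this 0–1 scale, the study's reported 37.47 corresponds to roughly 0.37 – a usable rough draft rather than a polished translation, consistent with the "first pass" framing below.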

The program is most effective when translating sentences of 118 or fewer characters. In some of the sentences, the program produced “hallucinations” – output that was syntactically correct in English but not accurate.

Gordin noted that in most cases, the translation would be usable as a first pass at the text. The authors propose that machine translation can be used as part of a “human-machine collaboration,” in which human scholars correct and refine the models’ output.

Hundreds of thousands of clay tablets inscribed in the cuneiform script document the political, social, economic and scientific history of ancient Mesopotamia, they wrote. “Yet, most of these documents remain untranslated and inaccessible due to their sheer number and the limited quantity of experts able to read them.”

They concluded that translation is a fundamental human activity, with a long scholarly history since the beginning of writing. “It can be a complex process, since it commonly requires not only expert knowledge of two different languages but also different cultural milieus. Digital tools that can assist with translation are becoming more ubiquitous every year, tied to advances in fields like optical character recognition (OCR) and machine translation. Ancient languages, however, still pose a towering problem in this regard. Their reading and comprehension require knowledge of a long-dead linguistic community, and moreover, the texts themselves can also be very fragmentary.”


*Press Release* memoQ Introduces TM+, the Next-Generation Translation Memory Engine - Slator - Translation

The translation industry significantly benefits from the constant evolution of translation technology. The current resources available greatly enhance work efficiency, but it is crucial to keep up with the latest technologies to stay competitive in the ever-changing landscape of the industry.

Translation memories are an essential tool in localization, allowing users to leverage previously translated materials. The developers behind memoQ have noticed a significant increase in the size and number of translation memories being created in memoQ products. To facilitate the utilization of these resources and to showcase that translation memory technology has not yet reached its peak, memoQ has developed TM+, the next-generation translation memory engine.
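memoQ's TM+ engine is proprietary, but the core idea of any translation memory lookup can be illustrated with a toy sketch: store previously translated segments, then retrieve exact or fuzzy matches for new source text. The sketch below uses Python's difflib for similarity scoring; the function name, threshold, and sample data are illustrative and are not memoQ's API:

```python
from difflib import SequenceMatcher

def tm_lookup(segment, memory, threshold=0.75):
    """Return (source, target, score) tuples from a translation memory,
    best match first. A score of 1.0 is an exact match; scores between
    the threshold and 1.0 are fuzzy matches a translator can adapt."""
    hits = []
    for source, target in memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold:
            hits.append((source, target, score))
    return sorted(hits, key=lambda hit: hit[2], reverse=True)

# Toy English->French memory (illustrative data)
memory = {
    "Save the file": "Enregistrer le fichier",
    "Save the file as": "Enregistrer le fichier sous",
    "Delete the file": "Supprimer le fichier",
}
```

A production engine indexes millions of segments for fast lookup rather than scanning them linearly – which is precisely the scaling problem TM+ is advertised as addressing.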

As the amount of translation work increases, it becomes crucial to have a translation memory engine that is consistently fast and dependable. TM+ provides a reliable solution that can efficiently manage over 10 million segments without performance degradation. The main benefits include dramatically improved performance for TMX imports, pre-translation, statistics, and lookup results. With the arrival of TM+, users can expect a significant improvement in their overall productivity.

Combining the most advanced translation memory and machine translation technologies can result in an optimal and predictable localization workflow that delivers high-performance and scalable results. memoQ users can choose from a myriad of machine translation engines through plugins, with new ones being continuously added to ensure access to the latest MT technologies. TM+ offers the highest-quality translation memory engine that can be used together with specialized machine translation systems provided by memoQ’s technology partners.

Florian Sachse, Chief Evangelist at memoQ

“Even though machine translation has become more widely used, translation memories stay relevant in all cases where predictability and repeatability in translation workflows are required. Translations stored in translation memories continue to grow, with some TMs containing millions of translation units. Updating our codebase is one of our focus areas, and the revamped TM+ is an important milestone in our product strategy,” said Florian Sachse, Chief Evangelist at memoQ.

TM+ is initially available in memoQ 10.0 and will be continuously improved with new updates and features in future releases. Over the next two years, TM+ will gradually replace the classic TM, but support for the classic TM will still be available until 2025.

Click here to learn more about TM+ and other productivity boosters memoQ 10.0 offers!


Tuesday, May 2, 2023

Lost Dreamcast classic Rent-A-Hero No. 1 gets fan translation - Destructoid - Translation

Modders doing what Sega don’t

We never got a taste of Sega’s 2000 Dreamcast title, Rent-A-Hero No. 1. But fans have stepped in where the company let us down by creating a very comprehensive translation of the title.

Originally released in Japan in May 2000, Rent-A-Hero No. 1 was a game that got lost in the implosion of the Dreamcast. Sega eventually ported it to Xbox in 2003, but since the company’s support of Microsoft’s console was starting to waver by then, it never made it across the pond. This is despite the fact that some reviewers at the time were actually provided with copies of the translated version.

Rent-A-Hero No. 1 is an action RPG about a 16-year-old who gets a set of armor that allows them to take a part-time job as a superhero. Perhaps Sega never wanted to market it over here because of its deep roots in Japanese culture. That’s less of a problem in today’s world of Yakuza and Persona titles, but at the time, it was enough to give publishers pause.

Rent-A-Hero No. 1 is actually a remake of the 1991 Genesis/Mega Drive title, which also was never localized. You know, it’s never too late, Sega.

Until they realize that, a hefty team of unofficial modders (dubbed the Rent-A-Modders) did a pretty outstanding job at handling the localization. Beyond translating the text, the team added many exclusive features, including new models and animations, VMU graphics, and secret modes. It’s a hefty little patch that has a lot of love behind it.

You can find the patch, as well as the full credits of the Rent-A-Modders, right here on their GitHub.

Zoey Handley
Staff Writer - Zoey is a gaming gadabout. She got her start blogging with the community in 2018 and hit the front page soon after. Normally found exploring indie experiments and retro libraries, she does her best to remain chronically uncool.


New AI decoder can translate brainwaves into text - study - The Jerusalem Post - Translation

Scientists have developed a system that can read a person's mind and reproduce the brain activity in a stream of text, relying in part on a transformer model similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

This is an important step toward developing brain–computer interfaces that can decode continuous language through non-invasive recordings of thoughts.

Results were published in a recent study in the peer-reviewed journal Nature Neuroscience, led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.

A non-invasive method

Tang and Huth's semantic decoder isn't implanted in the brain directly; instead, it uses fMRI machine scans to measure brain activity. For the study, participants in the experiment listened to podcasts while the AI attempted to transcribe their thoughts into text. 

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said Alex Huth. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

Illustrative image of artificial intelligence (credit: PIXABAY)

These kinds of systems could be especially helpful to people who are unable to physically speak, such as those who have had a stroke, and enable them to communicate more effectively. 

According to Tang and Huth, study findings demonstrate the viability of non-invasive language brain–computer interfaces. They say that the semantic decoder still needs some more work and can only provide the basic “gist” of what someone is thinking. The AI decoder produced a text that closely matched a subject's thought only about half of the time.

The decoder in action

The study provides some examples of the decoder in action. In one case, a test subject heard, and consequently thought, the sentence "... I didn't know whether to scream cry or run away instead I said leave me alone I don't need your help Adam disappeared."

The decoder reproduced this part of a sentence as "... started to scream and cry and then she just said I told you to leave me alone you can't hurt me anymore I'm sorry and then he stormed off." 

A work in progress

The researchers also added that they gave the issue of mental privacy some thought. “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said Jerry Tang. “We want to make sure people only use these types of technologies when they want to and that it helps them.”

For this reason, they also tested whether successful decoding requires the cooperation of the person being decoded, and found that cooperation is absolutely required for the decoder to work.

Huth and Tang believe their system could in the future be adapted to work with portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).

“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth concludes. “So, our exact kind of approach should translate to fNIRS.”
