Wednesday, May 3, 2023

AI translation puts asylum seekers in jeopardy - Boing Boing

Maybe you've used a translation program powered by neural networks or artificial intelligence and run into an embarrassing or amusing error.

Imagine how much more likely those kinds of errors become if the language you were translating from or into were relatively obscure, with its speakers neither numerous nor economically powerful.

And now imagine that every nuance of that translation affected the health and well-being of you and your family.

Afghan refugees applying for U.S. asylum are finding that just these types of errors can lead to rejected applications and dire consequences.

Human translators are expensive, but the cheaper alternative of AI translation can have disastrous consequences, according to an article in Rest of World.

"In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

"A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee's asylum bid because her written application didn't match the story told in the initial interviews.

"In the interviews, the refugee had first maintained that she'd made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

"After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the 'I' pronouns in the woman's statement to 'we.'"

Damian Harris-Hernandez, co-founder of the Refugee Translation Project, told Rest of World: "[Machine translation] doesn't have a cultural awareness. Especially if you're doing things like a personal statement that's handwritten by someone. … The person might not be perfect at writing, and also might use metaphors, might use idioms, turns of phrases that if you take literally, don't make any sense at all."

Despite these dangerous flaws, companies are marketing AI translation services to U.S. government agencies and to refugee organizations.

At least one AI developer has recognized the risks.

"OpenAI, the company that makes ChatGPT, updated its user policies in late March with rules that prohibit the use of the AI chatbot in 'high-risk government decision-making,' including work related to both migration and asylum."

