Thursday, December 14, 2023

Respond Crisis Translation Founded by Google Vet Helps Migrants at the Border - Bloomberg - Translation

Last summer, Rosie Ibarra Lopez was meeting with a Mauritanian man at a US immigration detention center in Arizona, where she works with a nonprofit that assists asylum-seekers. She asked whether he spoke French. He shook his head. “Wolof?” she asked, a language spoken in parts of West Africa. Again, no. She reeled off a litany of possibilities, but each time the response was no. Finally she tried Pulaar, a language from the river basin shared by Senegal and Mauritania. He flashed her a look of relief.

Speaking no Pulaar, Ibarra did what advocates along the US-Mexico border increasingly do these days: She dashed off an email to Respond Crisis Translation, which was able to round up a Pulaar interpreter for her next meeting with the man. The goal, Ibarra says, is to prepare migrants for a legal process that can last months or even years, “but we can only do that if we have adequate interpretation.”

This Mind-Reading Cap Can Translate Thoughts to Text Thanks to AI - Singularity Hub - Translation

Wearing an electrode-studded cap bristling with wires, a young man silently reads a sentence in his head. Moments later, a Siri-like voice breaks in, attempting to translate his thoughts into text, “Yes, I’d like a bowl of chicken soup, please.” It’s the latest example of computers translating a person’s thoughts into words and sentences.

Previously, researchers have used implants surgically placed in the brain or bulky, expensive machines to translate brain activity into text. The new approach, presented at this week’s NeurIPS conference by researchers from the University of Technology Sydney, is impressive for its use of a non-invasive EEG cap and the potential to generalize beyond one or two people.

The team built an AI model called DeWave that’s trained on brain activity and language and linked it up to a large language model—the technology behind ChatGPT—to help convert brain activity into words. In a preprint posted on arXiv, the model beat previous top marks for EEG thought-to-text translation with an accuracy of roughly 40 percent. Chin-Teng Lin, corresponding author on the paper, told MSN they’ve more recently upped the accuracy to 60 percent. The results are still being peer-reviewed.

Though there’s a long way to go in terms of reliability, it shows progress in non-invasive methods of reading and translating thoughts into language. The team believes their work could give voice to those who can no longer communicate due to injury or disease or be used to direct machines, like walking robots or robotic arms, with thoughts alone.

Guess What I’m Thinking

You may remember headlines about “mind-reading” machines translating thoughts to text at high speed. That’s because such efforts are hardly new.

Earlier this year, Stanford researchers described work with a patient, Pat Bennett, who’d lost the ability to speak due to ALS. After implanting four sensors into two parts of her brain and extensive training, Bennett could communicate by having her thoughts converted to text at a speed of 62 words per minute—an improvement on the same team’s 2021 record of 18 words per minute.

It’s an amazing result, but brain implants can be risky. Scientists would love to get a similar outcome without surgery.

In another study this year, researchers at the University of Texas at Austin turned to a brain-scanning technology called fMRI. In the study, patients had to lie very still in a machine recording the blood flow in their brains as they listened to stories. After using this data to train an algorithm—based in part on the ChatGPT ancestor GPT-1—the team used the system to guess what participants were hearing based on their brain activity.

The system’s accuracy wasn’t perfect, it required heavy customization for each participant, and fMRI machines are bulky and expensive. Still, the study served as a proof of concept that thoughts can be decoded non-invasively, and the latest in AI can help make it happen.

The Sorting Hat

In Harry Potter, students are sorted into school houses by a magical hat that reads minds. We muggles resort to funny-looking swim caps punctured by wires and electrodes. Known as electroencephalography (EEG) caps, these devices read and record the electrical activity in our brains. In contrast with brain implants, they require no surgery but are considerably less accurate. The challenge, then, is to separate signal from noise to get a useful result.

In the new study, the team used two datasets containing eye-tracking and EEG recordings from 12 and 18 people, respectively, as they read text. The eye-tracking data helped the system slice brain activity up by word: when a person’s eyes flit from one word to the next, there should be a corresponding break between the brain activity associated with that word and the activity associated with the next.
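The eye-tracking trick can be pictured in a few lines of code. This is a hypothetical illustration, not the study's actual code: the function name, sampling rate, and fixation times are all invented for the example.

```python
# Hypothetical sketch: use eye-fixation onset times to slice a continuous
# EEG recording into word-level segments, one per word the reader looked at.
def slice_eeg_by_fixations(samples, sample_rate_hz, fixation_onsets_s):
    """samples: a list of per-timestep channel readings.

    Returns one sub-list of samples per fixated word, cut at the moment
    the eyes moved to each new word.
    """
    cuts = [int(t * sample_rate_hz) for t in fixation_onsets_s]
    cuts.append(len(samples))
    return [samples[a:b] for a, b in zip(cuts, cuts[1:])]

# Toy example: 2 seconds of single-channel data at 250 Hz, with the
# reader fixating three words at 0.0 s, 0.8 s, and 1.4 s.
eeg = [0.0] * 500
words = slice_eeg_by_fixations(eeg, 250, [0.0, 0.8, 1.4])
print([len(w) for w in words])  # [200, 150, 150]
```

Each segment then carries only the brain activity recorded while the reader's eyes rested on one word, which is what makes word-level training possible.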

They then trained DeWave on this data, and over time, the algorithm learned to associate particular brain wave patterns with words. Finally, with the help of a pre-trained large language model called BART—fine-tuned to understand the model’s unique output—the algorithm’s brain-wave-to-word associations were translated back into sentences.
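The brain-wave-to-word association step can be sketched as a discrete codebook lookup: each word-level EEG feature vector is snapped to its nearest codebook entry, yielding a token sequence that a language model like BART can then decode. The two-dimensional features and codebook values below are invented for illustration; DeWave's real encoder and fine-tuned decoder are far more involved.

```python
import math

# Hypothetical sketch of the discrete-codebook idea. Each codebook entry
# stands in for a learned brain-wave pattern; values here are invented.
CODEBOOK = {
    0: [0.9, 0.1],
    1: [0.1, 0.9],
    2: [0.5, 0.5],
}

def quantize(feature):
    """Return the codebook token whose vector is nearest to `feature`."""
    return min(CODEBOOK, key=lambda k: math.dist(CODEBOOK[k], feature))

# Each word's EEG segment (already reduced to a feature vector) becomes a
# token; the token sequence is what gets handed to the language model.
features = [[0.85, 0.2], [0.15, 0.8], [0.45, 0.55]]
tokens = [quantize(f) for f in features]
print(tokens)  # [0, 1, 2]
```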

In tests, DeWave outperformed top algorithms in the category in both the translation of raw brain waves and brain waves sliced up by word. The latter were more accurate, but still lagged way behind translation between languages—like English and French—and speech recognition. They also found the algorithm performed similarly across participants. Prior experiments have tended to report results for one person or require extreme customization.

The team says the research is more proof large language models can help advance brain-to-text systems. Although they used a relatively antique algorithm in the official study, in supplementary material they included results from larger models, including Meta’s original Llama algorithm. Interestingly, the larger algorithms didn’t improve results much.

“This underscores the complexity of the problem and the challenges of bridging brain activities with LLMs,” the authors wrote, calling for more nuanced research in the future. Still, the team hopes they can push their own system further, perhaps up to 90 percent accuracy.

The work shows progress in the field.

“People have been wanting to turn EEG into text for a long time and the team’s model is showing a remarkable amount of correctness,” the University of Sydney’s Craig Jin told MSN. “Several years ago the conversions from EEG to text were complete and utter nonsense.”

Image Credit: University of Technology Sydney

Wednesday, December 13, 2023

Dictionary.com names its Word of the Year. It's probably not what you think it is. - Mashable - Dictionary

Dictionary.com has announced its Word of the Year for 2023 and, in a move that should surprise few, it is related to the boom in artificial intelligence.

The Word of the Year is "hallucinate." At first blush that might not seem AI-related. You might've guessed words like, you know, "artificial" or "AI" itself. But "hallucinate," as Dictionary.com explains, is a major word in the AI world and one the site chose with a purpose.

As Dictionary.com defines it, in AI terms, hallucinate means "to produce false information contrary to the intent of the user and present it as if true and factual."

In a year that AI went mainstream, hallucinate stood out as a particularly important word. Dictionary.com noted it saw a 46 percent increase in lookups in 2023 and an 85 percent uptick in media usage.

"Hallucinate is particularly notable among the terms that AI has popularized because it refers not to an aspect of how AI functions but to one of the ways it can malfunction," Dictionary.com wrote in a statement announcing the Word of the Year. "In this way, it’s akin to other cautionary tech terms, like spam and virus, that are now entrenched in our language. This is just one of the reasons that our lexicographers expect the word to stay relevant—at least into the near future."

For better or worse, we're all going to be learning and using AI-related terms for the foreseeable future. Mashable's Cecily Mauran, in fact, wrote a comprehensive glossary of all the AI terms you need to know. Among the words in the glossary: hallucination. As Mauran notes, some folks might think AI is all-knowing and super-capable, but the fact that this term exists proves otherwise.

Wrote Mauran: "[Hallucination] happens because generative AI models work by predicting words based on probabilistic relation to the previous word. It isn't capable of understanding what it's generating. Let that be a reminder that ChatGPT might act sentient, but it's not."
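Mauran's point can be illustrated with a toy bigram model. The tiny corpus below is invented, and real chatbots use vastly larger neural networks, but the principle is the same: each word is chosen for its probability given what came before, not for its truth.

```python
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a tiny,
# invented corpus, then chain the most likely continuations.
counts = defaultdict(lambda: defaultdict(int))
corpus = "the moon is made of cheese . the moon is made of rock .".split()
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Pick the most frequent follower of `prev` -- fluent, not factual."""
    followers = counts[prev]
    return max(followers, key=followers.get)

# Starting from "the", the model chains probable words with no way to
# check whether the resulting sentence is true.
word, sentence = "the", ["the"]
for _ in range(3):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # the moon is made
```

The output is grammatical and confident, which is exactly why hallucinated claims can be hard to spot.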

Leave the World Behind: What Did the Woman Say in Spanish? Translated - Yahoo Entertainment - Translation

Leave the World Behind features a scene that viewers who don’t speak or understand Spanish can’t follow, because the film doesn’t provide subtitles for the Spanish dialogue. So, what did the Spanish-speaking woman say? Here’s all you need to know.

What did the woman say in Spanish in Leave the World Behind?

Leave the World Behind’s Spanish-speaking woman asked Clay Sandford for help because she was lost and had no idea what was happening.

After the mysterious cyberattack took down all communications, Ethan Hawke’s Clay Sandford drove into town to get some information about what was happening. On his way, around the 50-minute mark of the film, he was stopped by a panicking woman who seemed to be asking for his help. The cast list reveals that the woman’s name is Salvadora, played by actress Vanessa Aspillaga.

However, since Clay was not able to understand her, he left her behind. For fans who wish to find out what she said, here’s the English translation:

“Thank God I found someone! I’m trying to get back to my home! I’m lost! I’ve been walking for a while! I need to use your phone! You’re the first person I’ve seen all day! We have to get out of here!

I just saw a plane that was spraying red gas in the vicinity. I saw some deer, more than 50. They were coming out of the woods. Please! I need to go home, sir. A military plane appeared and fled. There’s no one around! Is it a chemical attack?”

But what does this scene truly mean? The film plays with quite a few themes, including trust and humanity. Afraid of a stranger, Clay simply left her behind, even though he knew it was morally wrong not to help someone in need.

Furthermore, there is an irony to the scene: Clay wanted to find out more about what was happening. Had he shown a little more faith and humanity, and not been selfish or afraid, his questions might have been answered much sooner.

For more Leave the World Behind updates, find out what role Obama played in its development. Also, check out how Elon Musk reacted to the Tesla scene.

The post Leave the World Behind: What Did the Woman Say in Spanish? Translated appeared first on ComingSoon.net - Movie Trailers, TV & Streaming News, and More.

Tuesday, December 12, 2023

Mind-reading AI can translate brainwaves into written text - New Scientist - Translation

An AI can decode brainwave recordings to predict the words someone is reading (Image: Vertigo3d/Getty Images)

Using only a sensor-filled helmet combined with artificial intelligence, a team of scientists has announced they can turn a person’s thoughts into written words.

In the study, participants read passages of text while wearing a cap that recorded electrical brain activity through their scalp. These electroencephalogram (EEG) recordings were then converted into text using an AI model called DeWave.

Chin-Teng Lin at the University of Technology Sydney (UTS), Australia, says the technology is non-invasive, relatively inexpensive and easily transportable.

While the system is far from perfect, with an accuracy of approximately 40 per cent, Lin says more recent data currently being peer-reviewed shows an improved accuracy exceeding 60 per cent.

In the study presented at the NeurIPS conference in New Orleans, Louisiana, participants read the sentences aloud, even though the DeWave program doesn’t use spoken words. However, in the team’s latest research, participants read the sentences silently.

Last year, a team led by Jerry Tang at the University of Texas at Austin reported a similar accuracy in converting thoughts to text, but MRI scans were used to interpret brain activity. Using EEG is more practical, as subjects don’t have to lie still inside a scanner.

The DeWave model was trained by looking at lots of examples where brain signals match up with specific sentences, says team member Charles Zhou at UTS.

“For instance, when you think about saying ‘hello’, your brain sends out certain signals,” says Zhou. “DeWave learns how these signals relate to the word ‘hello’ by seeing many examples of these signals for different words or sentences.”
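Zhou's description of learning from many signal-word examples can be sketched as a nearest-prototype lookup: average the signals seen for each word, then label a new signal with the word whose average it most resembles. The feature vectors below are invented for illustration; the real model learns far richer representations from far more data.

```python
import math
from collections import defaultdict

# Invented training pairs: each word appears with several example
# "brain signals" (here, toy 2-D feature vectors).
examples = [
    ("hello", [0.9, 0.1]), ("hello", [0.8, 0.2]),
    ("world", [0.1, 0.9]), ("world", [0.2, 0.8]),
]

# "Seeing many examples": average the signals observed for each word.
sums = defaultdict(lambda: [0.0, 0.0])
n = defaultdict(int)
for word, sig in examples:
    sums[word] = [a + b for a, b in zip(sums[word], sig)]
    n[word] += 1
prototypes = {w: [v / n[w] for v in sums[w]] for w in sums}

def decode(signal):
    """Return the word whose averaged prototype is nearest to `signal`."""
    return min(prototypes, key=lambda w: math.dist(prototypes[w], signal))

print(decode([0.85, 0.15]))  # hello
```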

Once DeWave understood the brain signals well, the team connected it to an open-source large language model (LLM), akin to the AI that powers ChatGPT.

“This LLM is like a brainy writer that can make sentences. We tell this writer to pay attention to the signals from DeWave and use them as a guide to create sentences,” says Zhou.

Finally, the team trained both DeWave and the language model together to get even better at writing sentences based on the EEG data.

With further refinement, the researchers predict that the system could revolutionise communication for people who have lost speech, such as those affected by a stroke, and could also have applications in robotics.

Craig Jin at the University of Sydney says he is impressed with the work by Lin’s team. “It’s excellent progress,” he says.

“People have been wanting to turn EEG into text for a long time and the team’s model is showing a remarkable amount of correctness. Several years ago, the conversions from EEG to text were complete and utter nonsense.”

Dictionary.com chooses ‘hallucinate’ as 2023’s Word of the Year: Why? - The Hill - Dictionary

(NEXSTAR) – Dictionary.com has chosen “hallucinate” as its 2023 Word of the Year, but not in its traditional, trippy sense.

Instead, Dictionary.com is highlighting the word’s increased usage among users and critics of artificial intelligence (AI) programs, who have adopted the term to describe the inaccurate and often outlandish outputs that chatbots and other prompt-based AI programs attempt to present as fact.

Specifically, Dictionary.com’s latest definition of “hallucinate” reads as follows:

Computers, Digital Technology. (of a machine learning program) to produce false information contrary to the intent of the user and present it as if true and factual.

Dictionary.com

Popular AI chatbot programs can sometimes fall victim to these hallucinations, unintentionally sharing falsehoods with users. But hallucinations can also affect computer vision tools, which aim to assess and make recommendations based on visual data, IBM explains.

“For example, a healthcare AI model might incorrectly identify a benign skin lesion as malignant, leading to unnecessary medical interventions,” the company writes.

Hallucinations can also contribute to the spread of inaccurate or even prejudiced articles or research results, depending on where the programs are pulling information from.

Leaders in the AI field are optimistic that “hallucinations” will become fewer and further between in time, but others aren’t so sure they’ll ever reach a stage where fact-checking is no longer necessary, the Associated Press recently reported.

“Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure,” Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, told the outlet this year.

The term “hallucinate,” in the AI sense, doesn’t appear to be going anywhere either, according to Dictionary.com. The site’s editors say there’s been a significant increase (46 percent) in the number of users looking up its latest definition of “hallucinate” in 2023.

“Hallucinate as our 2023 Word of the Year encapsulates technology’s continuing impact on social change, and the continued discrepancy between the perfect future we envision and the messy one we actually achieve,” said Grant Barrett, the head of lexicography at Dictionary.com.

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
