Saturday, December 16, 2023

'Hallucinate' Is Dictionary.com's Word of the Year for 2023 - Smithsonian Magazine - Dictionary

This year, lookups on Dictionary.com increased for A.I.-related words, including “generative A.I.,” “GPT” and “chatbot.” Pexels

As 2023 draws to a close, Dictionary.com has picked “hallucinate” as its word of the year—but it may not mean exactly what you think it does.

Lexicographers for the popular online dictionary selected “hallucinate” in the context of artificial intelligence, they revealed in an announcement this week. In this realm, it’s a verb that means “to produce false information contrary to the intent of the user and present it as if true and factual.”

In other words, as Harmeet Kaur writes for CNN, it’s what happens when “chatbots and other A.I. tools confidently make stuff up.”

Recently, people have been using A.I.-powered chatbots for everything from writing essays to taking fast food orders. But the chatbots don’t always get things right, and their accuracy remains a vital issue in their development and widespread use.

The online dictionary’s lexicographers selected “hallucinate” because they’re confident that A.I. will be “one of the most consequential developments of our lifetime,” write Nick Norlen, Dictionary.com’s senior editor, and Grant Barrett, the site’s head of lexicography, in the blog post announcing the decision.

“Hallucinate seems fitting for a time in history in which new technologies can feel like the stuff of dreams or fiction—especially when they produce fictions of their own,” they add.

As A.I. tools have become more widespread, use of the word has skyrocketed. Digital media publications used “hallucinate” 85 percent more frequently in their articles this year than last year, and Dictionary.com recorded a 46 percent uptick in lookups for the word.

Other A.I.-related words and phrases also became more commonplace, including “LLM” (short for “large language model”), “generative A.I.,” “GPT” and “chatbot,” according to Dictionary.com. Dictionary lookups for A.I.-related words jumped by an average of 62 percent this year.
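The figures above are ordinary year-over-year relative changes. As a minimal sketch of the arithmetic (the raw lookup counts below are hypothetical, chosen only to reproduce the reported percentages; Dictionary.com publishes just the rates):

```python
def yoy_change_pct(last_year: int, this_year: int) -> float:
    """Percent change from last year's count to this year's count."""
    return (this_year - last_year) / last_year * 100

# Hypothetical counts matching the reported 46% rise in lookups
# for "hallucinate" and the 85% rise in media usage.
print(yoy_change_pct(100_000, 146_000))  # 46.0
print(yoy_change_pct(200, 370))          # 85.0
```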


“Hallucinate” has been used in computer science since at least 1971, and it’s been linked with machine learning and A.I. since the 1990s. Despite the word’s history, Dictionary.com only added the A.I.-related definition to its site earlier this year.

From an etymological perspective, “hallucinate” derives from the Latin word ālūcinārī, which means “to dream” or “to wander mentally,” per Dictionary.com.

Experts say “hallucinate” is comparable to other tech terms that initially had different meanings, such as “spam” and “virus.” This kind of evolution is relatively common, and linguists even have a term for it: metaphorical extension.

“It takes an older word with a different meaning but gives it a new technology spirit,” Barrett tells USA Today’s Kinsey Crowley. “It also represents this unfortunate discrepancy between what we want to happen with technology—we want it to be perfect and great at solving problems—yet it’s never quite there.”

Other words that made the Dictionary.com shortlist are “strike,” “wokeism,” “indicted,” “wildfire” and “rizz.” That last one—an abbreviation of “charisma”—was so popular it was named Oxford’s 2023 word of the year.



Friday, December 15, 2023

Dictionary.com 2023 Word Of The Year ‘Hallucinate’ Is An AI Health Issue - Forbes - Dictionary

Bad things can happen when you hallucinate. If you are human, you can end up doing things like putting your underwear in the oven. If you happen to be a chatbot or some other type of artificial intelligence (AI) tool, you can spew out false and misleading information, which—depending on the info—could affect many, many people in a bad-for-your-health-and-well-being type of way. And this latter type of hallucinating has become increasingly common in 2023 with the continuing proliferation of AI. That’s why Dictionary.com has an AI-specific definition of “hallucinate” and has named the word as its 2023 Word of the Year.

Dictionary.com noticed a 46% jump in dictionary lookups for the word “hallucinate” from 2022 to 2023, with a comparable increase in searches for “hallucination” as well. Meanwhile, there was a 62% jump in searches for AI-related words like “chatbot,” “GPT,” “generative AI” and “LLM.” So the increase in searches for “hallucinate” is likely due more to the following AI-specific definition of the word from Dictionary.com than to the traditional human one:

hallucinate [ huh-loo-suh-neyt ] verb (of artificial intelligence): to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.

Here’s a non-AI-generated news flash: AI can lie, just like humans. Not all AI, of course. But AI tools can be programmed to serve like little political animals or snake oil salespeople, generating false information while making it seem like it’s all about facts. The difference from humans is that AI can churn out this misinformation and disinformation at even greater speeds. For example, a study published in JAMA Internal Medicine last month showed how OpenAI's GPT Playground could generate 102 different blog articles “that contained more than 17,000 words of disinformation related to vaccines and vaping” within just 65 minutes. Yes, just 65 minutes. That’s about how long it takes to watch the TV show 60 Minutes and then make a quick uncomplicated bathroom trip that doesn’t involve texting on the toilet. Moreover, the study demonstrated how “additional generative AI tools created an accompanying 20 realistic images in less than 2 minutes.” Yes, humans no longer corner the market on lying and propagating false information.

Even when there is no real intent to deceive, various AI tools can still accidentally churn out misleading information. At the recent American Society of Health-System Pharmacists’ Midyear Clinical Meeting, researchers from Long Island University's College of Pharmacy presented a study that had ChatGPT answer 39 medication-related questions. The results were largely ChatInaccuracy. Only 10 of these answers were considered satisfactory. Yes, just 10. One example of a ChatWTF answer was ChatGPT claiming that Paxlovid, a Covid-19 antiviral medication, and verapamil, a blood pressure medication, didn’t have any interactions. This went against the reality that taking these two medications together could actually lower blood pressure to potentially dangerously low levels. Yeah, in many cases, asking AI tools medical questions could be sort of like asking Clifford C. Clavin, Jr. from Cheers or George Costanza from Seinfeld for some medical advice.

Of course, AI can hallucinate about all sorts of things, not just health-related issues. There have been examples of AI tools mistakenly seeing birds everywhere when asked to read different images. And an article for The Economist described how asking ChatGPT the question, “When was the Golden Gate Bridge transported for the second time across Egypt,” yielded the following response: “The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.” Did you catch that happening that month and year? That would have been disturbing news for anyone traveling from Marin County to San Francisco on the Golden Gate Bridge during that time period.

Then there was what happened back in 2016 when the Microsoft Tay AI chatbot jumped onto Twitter and began spouting out racist, misogynistic, and lie-filled tweets within 24 hours of being there. Microsoft soon pulled this little troublemaker off of the platform. The chatbot was sort of acting like, well, how many people on X (formerly known as Twitter) act these days.

But even seemingly non-health-related AI hallucination can have significant health-related effects. Getting incensed by a little chatbot telling you about how you and your kind stink can certainly affect your mental and emotional health. And being bombarded with too many AI hallucinations can make you question your own reality. It could even get you to start hallucinating yourself.

All of this is why AI hallucinations, like human hallucinations, are a real health issue—one that’s growing more and more complex each day. The World Health Organization and the American Medical Association have already issued statements warning about the misinformation and disinformation that AI can generate and propagate. But that’s merely the tip of the AI-ceberg regarding what really needs to be done. The AI-version of the word “hallucinate” may be the 2023 Dictionary.com Word of the Year. But word is that AI hallucinations will only keep growing and growing in the years to come.


What That Lady Was Saying To Clay In Spanish In Leave The World Behind (Translation Explained) - Screen Rant - Translation

Warning: Contains SPOILERS for Leave the World Behind.

Summary

  • The Spanish Lady's message to Clay remains ambiguous, leaving viewers wondering what she was trying to tell him.
  • In the first arc, strangers are suspicious of each other in a Barbarian-esque narrative. In the second arc, the world outside the house crumbles and one character must leave to investigate.
  • Clay abandons the Spanish Lady due to a language barrier, unaware of the valuable insights she could provide. This highlights the theme of humans turning against each other in a world without technology.

Leave the World Behind intentionally maintains an air of ambiguity surrounding the Spanish Lady's message to Clay, making it hard not to wonder what she was telling him. In its first arc, Leave the World Behind adopts a Barbarian-esque narrative where strangers end up in the same house and cannot help but be suspicious of one another. However, in its second arc, the movie finally reveals how the world outside is crumbling down, and it won't be long before one of the main characters will have to leave the house to figure out what is going on.

From there, while George leaves home to check if any of his neighbors are still around, Clay heads to the main city to get a newspaper. To Clay's surprise, he finds empty roads as he drives towards the city until he crosses paths with a Spanish lady. After listening to her frantically begging for help in Leave the World Behind, Clay abandons her and drives away because he fails to understand what she tells him in Spanish. Little does he realize that the language barrier could cost him more than he could comprehend.


The Lady Was Telling Clay About The Red Pamphlets, Deer & Needing To Go Home

Although Clay grows increasingly anxious about helping the woman and understanding what she is trying to say, she gives him valuable insights into what is happening around them. She begins by thanking god that she found someone and telling Clay she is lost and wants to return home. She even tells him that she needs his phone, which is the only thing he seemingly understands. The Spanish Lady, Salvadora, also claims that Clay is the first person she has seen all day, and they must leave before it is too late.

While Clay thinks about abandoning her in the Netflix disaster movie, Salvadora mentions that she saw a plane dumping red smoke, likely referring to the drone that later dropped the red pamphlets on Clay's car. In addition, she tells him that she saw nearly 50 deer coming out of the forests and a military plane in the sky, and also asks if they were facing a chemical attack. While at it, she keeps pleading for help, saying she needs to get back home. Clay does not understand her but senses the desperation in her voice. However, he still abandons her in the middle of nowhere, losing the opportunity to do the morally correct thing and get new insights into all the weird happenings around them.

Leave the World Behind is available on Netflix for streaming.

Clay Leaving The Lady Behind Supported Leave The World Behind's Message About Humanity & Fear


Before Clay runs into Salvadora, he struggles to tune his radio and turn on the GPS in his car. To his dismay, all the technologies he has grown accustomed to do not work. That is when he spots a human after not seeing anyone for a long time. Ideally, finding a human in his scenario could have greatly benefited him, given how Salvadora knew a lot more about the apocalypse than him. However, Clay fails to make the best out of his situation without Google Translate or relevant technological tools. He is a media studies professor who is so dependent on the media that he cannot function in a world where all digital media sources have been taken down.

This theme is reinforced in Leave the World Behind's ending arc, where Ethan Hawke's Clay begs Kevin Bacon's Danny for medicine. Like Salvadora, he desperately seeks help from Danny and even confesses that he is helpless in a world that lacks the comforts of technology. Meanwhile, just as Clay previously ignored the lady's cries for help, Danny refuses to help Clay and his family, highlighting how humans have turned against one another in the face of danger. Fortunately, Leave the World Behind ends on an optimistic note: Danny helps Clay; Amanda and Ruth finally join forces; and Rosie gets to watch the finale of Friends.


Thursday, December 14, 2023

Perspective | My Mexican red rice is a translation of heritage through cooking - The Washington Post - Translation

This column comes from the Eat Voraciously newsletter.

I’m no stranger to translation. My mother tongue is not my dad’s, and my grandma and I communicate on the shaky ground of Spanglish, neither of us having ever arrived at the comfort of fluency. Many Latines (my family included!) might call me a “no sabo kid” – a young Latine who grew up speaking rudimentary Spanish, if any at all. Years of Spanish classes aided me a bit, but I’ve always relied on my dad to play interpreter.

I’ve grappled with the consequences of not being fluent in my family’s language for many years. Some are tangible: Sitting across from my grandma and waiting for my dad to translate each side of our conversation; blushing when people notice my last name and try to switch languages; getting teased by one of my uncles when my British boyfriend somehow speaks clearer Spanish than I do. Others are felt, a combination of embarrassment, regret and humor. I’m accustomed to stumbling over words, the sharp vowels and double r’s cutting a tongue that has dulled over time.

What has shocked me, though, is my lack of dexterity when cooking Mexican recipes – family recipes. Outside of my grandma’s kitchen and away from my dad’s cooking, dishes as common as rice and beans stumped me, never tasting like they did in my memory.

Get the recipe: Arroz Rojo (Mexican Red Rice)

The first time I attempted arroz rojo, or Mexican red rice, I lifted the lid of the pot to find a pale yellow, over-salted pile of mush. There’s this notion that’s perpetuated in both popular media and dominant discourses that Latinas carry their culture in their fingertips, embodying sensuality in all arenas of their lives, the kitchen included. Tasting my rice, I thought I might be a defective model. The failure felt like another veil placed between me and my family and my heritage.

But even in the best, monolingual circumstances, I’ve learned that translating the taste of a dish is not a straightforward process. In the culinary epic “Small Fires,” Rebecca May Johnson uses classical reception studies to unpack how written recipes are translated “from the medium of language into the spattering physicality of ingredients.” Just as those who translate and adapt Homer’s “Odyssey” bring their own context to the source text, cooks also approach a singular dish or recipe in an infinite number of ways.

When I began developing this recipe for arroz rojo, I realized that the challenge was not reading recipes in Spanish or asking my dad to translate my aunt or grandma’s advice – it was translating a recipe across generations, cultures and skill sets.

Using a potent combination of memory and research, I began experimenting. I tried versions on both the stovetop and in the oven – my dad would seemingly choose his method at random. I noted a Reddit user’s advice passed down from their own grandmother to rinse the rice well, also known as enjuagar or “batir el arroz,” removing extra starch from the rice. I remembered my dad using caldo de pollo bouillon from a neon yellow box, a spoonful giving a dish salt and depth. I attempted a fresh tomato sauce, though I had seen the canned version in both recipes online and my family’s kitchen. And I did my best to avoid the spicy Russian roulette my dad played with his own rice when he added a whole jalapeño, seeds and pith included, to the blended sauce.

My recipe is a translation of the concept of the dish arroz rojo, and a translation of all the versions of the dish that I’ve had the pleasure of tasting. The rice is both soaked and toasted, and the sauce of fresh tomato, garlic and onion gets an unconventional boost from vegetable bouillon and tomato paste. Instead of finely chopping a chile pepper or leaving it whole, I cut it in half, adding a suggestion of spice as the rice cooks. While mixing in frozen corn, carrots and peas is traditional in Mexican American kitchens, I sometimes add in red bell pepper and lima beans.

I don’t know if my dad, grandma or aunt and uncles would recognize this recipe on paper. But I know that when I fluffed the final version, mixing in the thin red layer of sauce and unleashing the savory steam from the bottom, the scent was familiar. It tasted like fluency.

A couple months ago, my dad visited me in Washington, D.C., giving me an excellent opportunity to pick his brain about Mexican cooking. I took him to the “¡Presente!” exhibit at the National Museum of American History, a chronicle of Latine history in the United States and its territories. In the middle of the exhibit, we sat in a small theater and watched a short film about what it means to be Latine. A woman around my age shared that because of language barriers, she and her grandmother had never had a full conversation of more than a few sentences. Hearing my own reality out loud presented an unexpectedly heartbreaking perspective that I’d never considered before.

But as the day went on, we picked out a new Mexican cookbook for him to take home to Minnesota, and I showed him one of my favorite local Latin supermarkets. I thought of this recipe, and remembered that the act of translation isn’t always verbal. Sometimes it’s cooking a pot of rice. Sometimes it’s standing next to someone you love at the stove, filling each other’s plate, smiling in silence.



Dictionary.com names its Word of the Year. It's probably not what you think it is. - Mashable - Dictionary

Dictionary.com has announced its Word of the Year for 2023 and, in a move that should surprise few, it is related to the boom in artificial intelligence.

The Word of the Year is "hallucinate." At first blush that might not seem AI-related. You might've guessed words like, you know, "artificial" or "AI" itself. But "hallucinate," as Dictionary.com explains, is a major word in the AI world and one the site chose with a purpose.

As Dictionary.com defines it, in AI terms, hallucinate means "to produce false information contrary to the intent of the user and present it as if true and factual."


In a year that AI went mainstream, hallucinate stood out as a particularly important word. Dictionary.com noted it saw a 46 percent increase in lookups in 2023 and an 85 percent uptick in media usage.

"Hallucinate is particularly notable among the terms that AI has popularized because it refers not to an aspect of how AI functions but to one of the ways it can malfunction," Dictionary.com wrote in a statement announcing the Word of the Year. "In this way, it’s akin to other cautionary tech terms, like spam and virus, that are now entrenched in our language. This is just one of the reasons that our lexicographers expect the word to stay relevant—at least into the near future."


For better or worse, we're all going to be learning and using AI-related terms for the foreseeable future. Mashable's Cecily Mauran, in fact, wrote a comprehensive glossary of all the AI terms you need to know. Among the words in the glossary: hallucination. As Mauran notes, some folks might think AI is all-knowing and super-capable, but the fact that this term exists proves otherwise.

Wrote Mauran: "[Hallucination] happens because generative AI models work by predicting words based on probabilistic relation to the previous word. It isn't capable of understanding what it's generating. Let that be a reminder that ChatGPT might act sentient, but it's not."
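Mauran's point can be made concrete with a deliberately tiny sketch: a toy bigram model that picks each next word by weighted chance from word-pair statistics, with no notion of whether the resulting sentence is true. (The vocabulary and weights below are invented for illustration; real LLMs condition on far more context, but the failure mode is the same: fluent word sequences with no grounding in fact.)

```python
import random

# Toy bigram "language model": each word maps to candidate next words
# with weights. The model knows only word-to-word statistics; it has
# no way to check whether the sentences it emits are factual.
bigrams = {
    "the":    [("bridge", 3), ("capital", 2)],
    "bridge": [("was", 5)],
    "was":    [("built", 4), ("moved", 2)],
    "built":  [("in", 5)],
    "moved":  [("to", 5)],
    "in":     [("1937", 4), ("2016", 1)],
    "to":     [("Egypt", 1), ("storage", 1)],
}

def generate(start: str, length: int, seed: int = 0) -> str:
    """Greedily sample a word sequence from the bigram table."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        choices = bigrams.get(words[-1])
        if not choices:  # dead end: no known continuation
            break
        nexts, weights = zip(*choices)
        words.append(random.choices(nexts, weights=weights)[0])
    return " ".join(words)

# Every output is statistically plausible; some are confident nonsense
# like "the bridge was moved to Egypt".
print(generate("the", 5))
```

Changing the seed changes which continuation is sampled, but nothing in the procedure ever consults reality, which is exactly why "hallucinate" describes a malfunction baked into how the technique works.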


Dictionary Trolls Draymond Green Amid Indefinite Suspension - Sports Illustrated - Dictionary


Dictionary.com’s word of the year is a common one. But it doesn’t mean what you think - CNN - Dictionary

CNN —

“Hallucinate” is Dictionary.com’s word of the year — and no, you’re not imagining things.

The online reference site said in an announcement Tuesday that this year’s pick refers to a specific definition of the term pertaining to artificial intelligence: “to produce false information contrary to the intent of the user and present it as if true and factual.” In other words, it’s when chatbots and other AI tools confidently make stuff up.

Grant Barrett, head of lexicography at Dictionary.com, told CNN this particular definition of “hallucinate” was added to the site earlier this year, though its use in computer science dates at least as far back as 1971. As staff at the online dictionary considered contenders for the defining words of 2023, Barrett said it became clear that AI was increasingly changing our lives, working its way into our language as well.

“When we looked at the different words associated with artificial intelligence, we saw that ‘hallucinate’ really encapsulated this notion that AI wasn’t exactly what we as a culture wanted it to be,” Barrett said.

Barrett and Dictionary.com senior editor Nick Norlen wrote in a blog post that the site saw a 46% increase in lookups for “hallucinate” over the previous year, while its use in digital publications increased 85% year-over-year. The online reference also reported an average 62% increase in year-over-year lookups for other AI-related terminology, such as “chatbot,” “GPT” and “generative AI.”

Indeed, 2023 was a major year for AI — from impressive developments in the technology to contentious debates about its promises and pitfalls.

Since the launch of ChatGPT last year, people have used AI-powered tools to write essays, research papers, legal briefs and emails, with varying results. But as some in the tech world point to AI’s potential for productivity, others across sectors are concerned it could eliminate millions of jobs, reinforce racist and sexist biases and sow misinformation. The use of AI in the film and television industries was one of the issues at the heart of the Hollywood writers’ strike earlier this year, and leaders in the US and Europe are already pushing for regulations around the technology.

While the word “hallucinate” as it pertains to AI becomes more mainstream, some AI researchers criticize its use in this context. As CNN’s Catherine Thorbecke reported in August, some experts argue that the term anthropomorphizes AI, ascribing ill intent to language learning models that are actually trained on datasets influenced by humans.

But as the lexicographers at Dictionary.com see it, the word is here to stay.

“Our choice of hallucinate as the 2023 Word of the Year represents our confident projection that AI will prove to be one of the most consequential developments of our lifetime,” Norlen and Barrett wrote in the blog post.

“Data and lexicographical considerations aside, hallucinate seems fitting for a time in history in which new technologies can feel like the stuff of dreams or fiction—especially when they produce fictions of their own.”

Merriam-Webster, which recently announced “authentic” as its 2023 word of the year, also cited the rise of AI in its selection. The term saw a substantial increase in lookups thanks to “stories and conversations about AI, celebrity culture, identity, and social media,” the online dictionary said.

Other words that made Dictionary.com’s word of the year shortlist were indicted, rizz, strike, wildfire and wokeism.
