Friday, August 25, 2023

What Doc Holliday Says To Johnny In Latin? Tombstone Scene Translation Explained - Screen Rant - Translation


20,000 words included in new dictionary of Shakespeare's English - Medievalists.net - Dictionary

The Arden Encyclopedia of Shakespeare’s Language, published this week, aims to be the ‘first fully corpus-based dictionary of Shakespeare’s language and most comprehensive since Alexander Schmidt’s in the early 1870s.’

William Shakespeare used the word dotage to capture reduced mental ability (as in being blindly in love) rather than as a quaint term for old age; successes were really outcomes, so one could talk of a ‘bad success’; and, it turns out, the word bastard back then most often referred to a flower that was a genetic hybrid.


Dinner was Shakespeare’s preferred word for what we might think of as lunch (although his contemporaries also used it to refer to an evening meal). Beef, as today, was strongly associated with the English, but particularly the lower ranks (it was thought to reduce intelligence). And fish was not only considered inferior to red meat, it was also thought ‘decidedly dodgy’, being associated with Catholicism or sex.

This new research by Lancaster University sheds light on the times through The Arden Encyclopedia of Shakespeare’s Language, published by Bloomsbury earlier this week. The Encyclopedia is the product of 25 years of preparation, a £1 million Arts and Humanities Research Council grant, a team of up to 25 researchers, and seven years of hard work.


The project, conceived and led by Jonathan Culpeper, a Professor of English Language and Linguistics at Lancaster University, will result in a unique five-volume reference work detailing and illuminating Shakespeare’s rich language. A key feature of the project is that it uses corpus linguistics, the computer-aided analysis of massive datasets of language, to provide evidence-based accounts of Shakespeare’s language.

And not just of Shakespeare’s words. The volumes of the Encyclopedia will also reveal the linguistic thumbprints of plays and characters, the articulation of themes such as love and death, and the networks of character interaction. Professor Culpeper, who worked together with Dr Andrew Hardie and Dr Jane Demmen, also from Lancaster University, on these volumes, explains: “This is the first fully corpus-based dictionary of Shakespeare’s language and the most comprehensive since Alexander Schmidt’s in the early 1870s.”

This month sees the publication of the first two volumes, which together constitute a dictionary. Volumes 1 and 2 comprise 20,000 word-entries gleaned from a million-word corpus of Shakespeare’s plays and compared with a matching million-word corpus of contemporary plays, along with a huge corpus of 320 million words of various writings of the period.

“So why the comparisons?” asks Professor Culpeper. “Other dictionaries define Shakespeare by looking just at Shakespeare. The result is a bit circular – Shakespeare’s words had lives amongst his contemporaries, and we pay attention to that, along with what they are doing in Shakespeare’s plays.”
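To give a concrete sense of the kind of corpus comparison described above, here is a minimal sketch that scores words in a target corpus against a reference corpus using log-likelihood keyness, a standard corpus-linguistics statistic. The file names, the simple tokenizer and the choice of statistic are illustrative assumptions, not details of the Encyclopedia’s actual methodology.

```python
# Minimal sketch: score words in a target corpus (e.g. Shakespeare's plays) against
# a reference corpus (e.g. contemporary plays) with log-likelihood (G2) keyness.
# The file names and the simple tokenizer are illustrative assumptions only.
import math
import re
from collections import Counter


def tokenize(text):
    # Crude lowercase word tokenizer; real corpus work would regularize
    # early-modern spelling variants before counting.
    return re.findall(r"[a-z]+(?:'[a-z]+)?", text.lower())


def log_likelihood(freq_target, total_target, freq_ref, total_ref):
    # Dunning's log-likelihood for one word's frequency in two corpora.
    combined = (freq_target + freq_ref) / (total_target + total_ref)
    expected_target = total_target * combined
    expected_ref = total_ref * combined
    g2 = 0.0
    if freq_target:
        g2 += freq_target * math.log(freq_target / expected_target)
    if freq_ref:
        g2 += freq_ref * math.log(freq_ref / expected_ref)
    return 2 * g2


target = Counter(tokenize(open("shakespeare_plays.txt", encoding="utf-8").read()))
reference = Counter(tokenize(open("contemporary_plays.txt", encoding="utf-8").read()))
n_target, n_ref = sum(target.values()), sum(reference.values())

keyness = {
    word: log_likelihood(count, n_target, reference.get(word, 0), n_ref)
    for word, count in target.items()
}

# Print the 20 words most overrepresented in the target corpus relative to the reference.
overused = {
    w: s for w, s in keyness.items()
    if target[w] / n_target > reference.get(w, 0) / n_ref
}
for word, score in sorted(overused.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{word:>15}  G2={score:9.1f}  target={target[word]}  reference={reference.get(word, 0)}")
```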


It is perhaps obvious that wicked occurs densely in religious texts of the time, but who would have guessed that of the highly frequent word ourselves? Frequent words such as alas or ah are revealed to be heavily used by female characters, doing the emotional work of lamentation in the plays (especially the histories).

“Frequent words,” Professor Culpeper comments, “often excluded from previous Shakespearean dictionaries, have a wood for the trees problem.”

The dictionary also surveys the infrequent, flagging words that occur but once in Shakespeare, such as bone-ache (syphilis) or ear-kissing (whispering, though other writers used it for flattering), and words that seem to have their earliest occurrence in Shakespeare (including the decidedly modern-sounding self-harming).


The Encyclopedia is written for a general audience. The remaining volumes will be published over the next three years. To learn more, please visit the publisher’s website or buy this set on Amazon.com.

Top Image: Work on a new ‘verbal treasure trove’ captures nuances and uses of Shakespeare’s words. Photo courtesy of Lancaster University, UK


Meta releases SeamlessM4T AI model for text and speech translation - Mashable - Translation

Meta's latest AI output is a major advancement for real-time text and speech translation.

On Tuesday, the company released SeamlessM4T: a multimodal model that translates text to speech and vice versa. Meta claims SeamlessM4T is "the first all-in-one multilingual multimodal AI translation and transcription model," meaning it is uniquely able to translate and transcribe languages at the same time. SeamlessM4T can translate speech-to-text, speech-to-speech, text-to-speech, and text-to-text inputs for up to 100 languages. Speech-to-speech and text-to-speech outputs support 35 languages.
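For a rough sense of how a developer might drive such a model, here is a minimal text-to-text translation sketch using the Hugging Face transformers integration of SeamlessM4T. The checkpoint name, class names and three-letter language codes are assumptions about the public release rather than details from Meta's announcement, so treat this as a sketch, not official usage.

```python
# Minimal sketch: text-to-text translation (English -> French) with SeamlessM4T via
# the Hugging Face transformers integration. The checkpoint name, class names and
# 3-letter language codes are assumptions about the public release, not details
# confirmed by Meta's announcement.
from transformers import AutoProcessor, SeamlessM4TModel

model_id = "facebook/hf-seamless-m4t-medium"  # assumed public checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = SeamlessM4TModel.from_pretrained(model_id)

# Prepare English text and ask for French text tokens only (no speech synthesis).
text_inputs = processor(text="Hello, how are you?", src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)

translation = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
print(translation)  # expected: a French rendering such as "Bonjour, comment allez-vous ?"
```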


Like other AI models recently released by Meta, including Llama 2 and AudioCraft, SeamlessM4T is publicly available for researchers and developers with a research license. Alongside the model, Meta is also releasing its training dataset, SeamlessAlign, which contains 270,000 hours of speech and text alignments. Unlike OpenAI and Google, Meta has made a point of making its models open source and publicly available. Launching open-source models has the dual effect of enabling developers to build on and improve Meta's products, while also winning points amongst AI ethicists who are calling for transparency in generative AI systems.

Meta's open-source approach may seem altruistic, but it's a strategic power move in a ruthlessly competitive market against other big tech companies developing AI. There's also the issue of data collection that all AI models must contend with: although the blog post says SeamlessM4T's dataset (SeamlessAlign) consists of publicly available data, there are still ethical and legal issues surrounding the use of copyrighted works and personal data without consent.

Meta's announcement didn't detail specific plans for SeamlessM4T, only hinting that it wants "to explore how this foundational model can enable new communication capabilities." In other words, we might someday see a consumer-facing version of SeamlessM4T on WhatsApp or Instagram.


Thursday, August 24, 2023

Meta releases AI model for translating speech between dozens of languages - Reuters - Translation

NEW YORK, Aug 22 (Reuters) - Facebook parent company Meta Platforms (META.O) on Tuesday released an AI model capable of translating and transcribing speech in dozens of languages, a potential building-block for tools enabling real-time communication across language divides.

The company said in a blog post that its SeamlessM4T model could support translations between text and speech in nearly 100 languages, as well as full speech-to-speech translation for 35 languages, combining technology that was previously available only in separate models.

CEO Mark Zuckerberg has said he envisions such tools facilitating interactions between users from around the globe in the metaverse, the set of interconnected virtual worlds on which he is betting the company's future.

Meta is making the model available to the public for non-commercial use, the blog post said.

The world's biggest social media company has released a flurry of mostly free AI models this year, including a large language model called Llama that poses a serious challenge to proprietary models sold by Microsoft-backed (MSFT.O) OpenAI and Alphabet's (GOOGL.O) Google.

Zuckerberg says an open AI ecosystem works to Meta's advantage, as the company has more to gain by effectively crowd-sourcing the creation of consumer-facing tools for its social platforms than by charging for access to the models.

Nonetheless, Meta faces similar legal questions as the rest of the industry around the training data ingested to create its models.

In July, comedian Sarah Silverman and two other authors filed copyright infringement lawsuits against both Meta and OpenAI, accusing the companies of using their books as training data without permission.

For the SeamlessM4T model, Meta researchers said in a research paper that they gathered audio training data from 4 million hours of "raw audio originating from a publicly available repository of crawled web data," without specifying which repository.

A Meta spokesperson did not respond to questions on the provenance of the audio data.

Text data came from datasets created last year that pulled content from Wikipedia and associated websites, the research paper said.

Reporting by Katie Paul, Editing by Rosalba O'Brien


Wednesday, August 23, 2023

Austin elementary school teacher surprised with dictionaries for all of his students - KEYE TV CBS Austin - Dictionary

(Photo: Harmony Public Schools)


Navigating linguistical currents: The word that broke the Dictionary ... - The Dickinson Press - Dictionary

Hey there, readers of the Verbal Versatility Press! Buckle up, because today we're taking a linguistic roller coaster ride that'll make your thesaurus spin faster than a DJ's turntable. Our tale involves a head-spinning phone call from a subscriber who read us the riot act, politely albeit harshly, on our usage of a specific word in a headline.

What caused their linguistic anger, you ask? Well, let's just say they weren't too keen on the "I" word — you know the one… inclusivity.

Picture this. The sun is shining, birds are chirping and Dickinson Public School decides to jazz up its expansion and renovation plans for the new High School to be in tune with the ever-sensible Americans with Disabilities Act (ADA). We even put out an article that elegantly explained the details of their endeavor — a bond and plan that we at The Press support and stand behind, but that’s next week’s column.

But ah, the word that 'shall not be used' garnered the ire of said subscriber. They actually touted the article as well-written and informative, but the Sauron of evils was in the headline. You know, that thing that everyone reads at the top of the story without ever reading the article most of the time. Well, this headline drew their wrath because it dared to mention the dreaded "I" word.

Inclusivity, dear readers, that seemingly innocuous term, has apparently been snatched by the "far-left" and taken on a life of its own, like a rebellious teenager refusing to follow curfew. Our caller insisted that the word had lost its way, becoming tangled in the barbed wire of gender, race and sexuality of the progressive movement. To them, it was like using "rad" to describe an amazing skateboard trick, when "rad" is actually short for "radical" — they were convinced it was a gnarly and twisted mistake…or the spawn of “liberal” media.


Now, let’s pause to ruminate on this. I’ll make the coffee, and be right back.

It's a peculiar quirk of human nature to attach new meanings to familiar words. Just because some folks decided in the '90s that "bad" could mean "good," doesn't mean we're gonna break out the air horns and send out an all-points-bulletin to our newsroom demanding we stop using the original "bad" in our stories, right?

Imagine a newspaper article with a headline that reads, "City Commission enacts Bad policy," followed by a story about some really cool thing they did that was cognizant of taxpayer dollars and was a clear benefit to the community. Let's just say it'd be a total flop with that headline, not to mention that I’m sure commissioners would be reaching out about the headline too.

But hey, words are slippery creatures. They're like jellyfish at a beach party — stinging if you're not careful, yet pretty awesome when you embrace their true essence. You see, words come with baggage, but it’s all about context. If we replaced every word every time it got co-opted by politics, we’d have newspapers that read like Haiku poetry.

“Handcuffed, silent man,
Drugs found, freedom slips away,
Choices led astray.”

I thought long and hard about even addressing this issue to be honest. I mean, we get angry phone calls all the time about all sorts of issues, and believe it or not I don’t write about them. But here I felt it important to take a moment to marvel at the current state of affairs in the good ol' U.S. of A.

Everything, from granola bars and NASA to beer and pillows has somehow become an arena for political sparring. Can I even use the word “sparring,” or is that ableist nowadays? I digress.

Let's toast to the power of words — their history, their evolution and their remarkable ability to keep us all on our toes. Inclusivity, dear readers, isn't just about fitting all the cool kids into the same clubhouse. It’s also about letting words be words, embracing their original meanings while acknowledging their quirky, modern twists. After all, language isn't just a tool; it's the epicenter of human connection, even when it feels like a linguistic minefield at times.


So, as I sip my coffee and savor the strange beauty of it all, let's remember that inclusivity in language is just as important as in society itself. And if the Word Police come knocking at our headlines, ready to read us our etymological rights, we'll just smile, hand them a dictionary and say, "Chill bruh, you doin’ too much fam, like no cap.”

Words change, but we're keeping it old-school, newspaper cool, here at The Dickinson Press. Why? Cause it's bad… but in a good way.

Editor's Note: On a more serious note, we do take all complaints and reader input into consideration. Readers are of immense value to us and your opinions are needed. But, as the old adage goes, “You’ll probably win more bees with honey.” Thank you, Dickinson, for the ear and patience with us here at The Dickinson Press.

Opinion by James B. Miller, Jr.

James B. Miller, Jr. is the Editor of The Dickinson Press in Dickinson, North Dakota. He strives to bring community-driven, professional and hyper-local focused news coverage of southwest North Dakota.


Meta releases an AI model that can transcribe and translate close to 100 languages - TechCrunch - Translation

In its quest to develop AI that can understand a range of different dialects, Meta has created an AI model, SeamlessM4T, that can translate and transcribe close to 100 languages across text and speech.

SeamlessM4T is available in open source alongside SeamlessAlign, a new translation dataset, and Meta claims it represents a “significant breakthrough” in the field of AI-powered speech-to-speech and speech-to-text translation.

“Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta writes in a blog post shared with TechCrunch. “SeamlessM4T implicitly recognizes the source languages without the need for a separate language identification model.”
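Below is a minimal speech-to-text-translation sketch of the path that quote describes, again via the (assumed) Hugging Face transformers integration: no source language is passed with the audio, matching the claim that the model identifies it implicitly. The checkpoint name, the sample file, the 16 kHz resampling step and the API details are assumptions, not specifics from Meta's post.

```python
# Minimal sketch: speech-to-text translation (S2TT) into English with SeamlessM4T,
# again via the assumed Hugging Face transformers integration. No src_lang is passed
# with the audio; the model is expected to identify the spoken language itself.
# The checkpoint name, the sample file and the 16 kHz requirement are assumptions.
import torchaudio
from transformers import AutoProcessor, SeamlessM4TModel

model_id = "facebook/hf-seamless-m4t-medium"  # assumed public checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = SeamlessM4TModel.from_pretrained(model_id)

# Load any spoken-language clip, downmix to mono and resample to 16 kHz.
waveform, sample_rate = torchaudio.load("speech_sample.wav")  # hypothetical file
waveform = torchaudio.functional.resample(waveform.mean(dim=0), sample_rate, 16_000)

audio_inputs = processor(audios=waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
output_tokens = model.generate(**audio_inputs, tgt_lang="eng", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))
```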

SeamlessM4T is something of a spiritual successor to Meta’s No Language Left Behind, a text-to-text machine translation model, and Universal Speech Translator, one of the few direct speech-to-speech translation systems to support the Hokkien language. And it builds on Massively Multilingual Speech, Meta’s framework that provides speech recognition, language identification and speech synthesis tech across more than 1,100 languages.

Meta isn’t the only one investing resources in developing sophisticated AI translation and transcription tools.

Beyond the wealth of commercial services and open source models already available from Amazon, Microsoft, OpenAI and a number of startups, Google is creating what it calls the Universal Speech Model, a part of the tech giant’s larger effort to build a model that can understand the world’s 1,000 most-spoken languages. Mozilla, meanwhile, spearheaded Common Voice, one of the largest multi-language collections of voices for training automatic speech recognition algorithms.

But SeamlessM4T is among the more ambitious efforts to date to combine translation and transcription capabilities into a single model.

In developing it, Meta says that it scraped publicly available text (in the order of “tens of billions” of sentences) and speech (4 million hours) from the web. In an interview with TechCrunch, Juan Pino, a research scientist at Meta’s AI research division and a contributor on the project, wouldn’t reveal the exact sources of the data, saying only that there was “a variety” of them.

Not every content creator agrees with the practice of leveraging public data to train models that could be used commercially. Some have filed lawsuits against companies building AI tools on top of publicly available data, arguing that the vendors should be compelled to provide credit if not compensation — and clear ways to opt out.

But Meta claims that the data it mined — which might contain personally identifiable information, the company admits — wasn’t copyrighted and came primarily from open source or licensed sources.

Whatever the case, Meta used the scraped text and speech to create the training dataset for SeamlessM4T, called SeamlessAlign. Researchers aligned 443,000 hours of speech with texts and created 29,000 hours of “speech-to-speech” alignments, which “taught” SeamlessM4T how to transcribe speech to text, translate text, generate speech from text and even translate words spoken in one language into words in another language.

Meta claims that on an internal benchmark, SeamlessM4T performed better against background noises and “speaker variations” in speech-to-text tasks compared to the current state-of-the-art speech transcription model. It attributes this to the rich combination of speech and text data in the training dataset, which Meta believes gives SeamlessM4T a leg up over speech-only and text-only models.

“With state-of-the-art results, we believe SeamlessM4T is an important breakthrough in the AI community’s quest toward creating universal multitask systems,” Meta wrote in the blog post.

But one wonders what biases the model might contain.

A recent piece in The Conversation points out the many flaws in AI-powered translation, including different forms of gender bias. For example, Google Translate once presupposed that doctors were male while nurses were female in certain languages, while Bing’s translator translated phrases like “the table is soft” as the feminine “die Tabelle” in German, which refers to a table of figures.

Speech recognition algorithms, too, often contain biases. A study published in The Proceedings of the National Academy of Sciences showed that speech recognition systems from leading companies were twice as likely to incorrectly transcribe audio from Black speakers as opposed to white speakers.

Unsurprisingly, SeamlessM4T isn’t unique in this regard.

In a whitepaper published alongside the blog post, Meta reveals that the model “overgeneralizes to masculine forms when translating from neutral terms” and performs better when translating from the masculine reference (e.g. nouns like “he” in English) for most languages.

Moreover, in the absence of gender information, SeamlessM4T prefers translating the masculine form about 10% of the time — perhaps due to an “overrepresentation of masculine lexica” in the training data, Meta speculates.

Meta makes the case that SeamlessM4T doesn’t add an outsize amount of toxic text in its translations, a common problem with translation and generative AI text models at large. But it’s not perfect. In some languages, like Bengali and Kyrgyz, SeamlessM4T makes more toxic translations — that is to say, hateful or profane translations — about socioeconomic status and culture. And in general, SeamlessM4T is more toxic in translations dealing with sexual orientation and religion.

Meta notes that the public demo for SeamlessM4T contains a filter for toxicity in inputted speech as well as a filter for potentially toxic outputted speech. That filter’s not present by default in the open source release of the model, however.

The larger issue with AI translators not addressed in the whitepaper is the loss of lexical richness that can result from their overuse. Unlike AI, human interpreters make choices unique to them when translating one language into another. They might explicate, normalize or condense and summarize, creating fingerprints known informally as “translationese.” AI systems might generate more “accurate” translations, but those translations could be coming at the expense of translation variety and diversity.

That’s probably why Meta advises against using SeamlessM4T for long-form translation and certified translations, like those recognized by government agencies and translation authorities. Meta also discourages deploying SeamlessM4T for medical or legal purposes — presumably an attempt to cover its bases in the event of a mistranslation.

That’s wise; there have been at least a few instances where AI mistranslations have resulted in law enforcement mistakes. In September 2012, police erroneously confronted a Kurdish man for financing terrorism because of a mistranslated text message. And in 2017, a cop in Kansas used Google Translate to ask a Spanish-speaking driver for permission to search his car for drugs, but because the translation was inaccurate, the driver didn’t fully understand what he’d agreed to and the case was eventually thrown out.

“This single system approach reduces errors and delays, increasing the efficiency and quality of the translation process, bringing us closer to making seamless translation possible,” Pino said. “In the future, we want to explore how this foundational model can enable new communication capabilities — ultimately bringing us closer to a world where everyone can be understood.”

Let’s hope humans aren’t left completely out of the loop in that future.
