Friday, March 25, 2022

How to Use the Dictionary in Google Docs - MUO - MakeUseOf - Dictionary

Have you ever been writing, only to discover you forgot the correct spelling of a word? Or, maybe you want to find a synonym to add some flair to a commonly used word. A dictionary tool can definitely help.

However, switching out of your document to perform a Google search or retrieve your dusty dictionary from the shelf can take your focus off your work. The built-in dictionary inside Google Docs helps keep you focused.

How to Use the Google Docs Dictionary

Google Docs comes standard with a ton of useful tools for document creation. For example, you can easily find images using the web search tool and even use drawing tools to spruce up your document.

However, the dictionary tool is one of our favorites. You can use it to look up definitions, find synonyms, figure out the spelling of a word, and more. Plus, the dictionary tool is super simple to use by following these steps:

  1. In your Google document, locate the toolbar at the top of your screen and select Tools.
  2. From the menu of options, select Dictionary. A window will appear to the right of your screen.
  3. Enter your search word in the search box next to the magnifying glass icon. Then, hit Return or Enter.
  4. Google Docs will show the definition of the word as well as applicable antonyms and synonyms.

If you want to dig deeper, go ahead and click on any of the hyperlinked words to see their definitions and details.

Image shows the dictionary inside Google Docs

There you have it! It really is as simple as that.

Sometimes, creating high-quality work calls for much more than a simple dictionary tool. And if you need more help, Google Docs offers so many other tools and add-ons that you're sure to find what you're looking for.



Thursday, March 24, 2022

Wednesday, March 23, 2022

Bourbon Dictionary w/Taylor Calandro 3-23-2022 - 1045 ESPN - 104.5 ESPN - Dictionary

Murray St. basketball radio color analyst Kenny Roth joins Matt to start hour three. Roth talks new LSU head coach Matt McMahon’s style as a basketball coach. We continue to recap Brian Kelly’s spring preview. Matt previews LSU-La Tech in baseball. We wrap the show with Otter Locks and What We Learned.


Spring 2022 New Releases: In Translation - Book Riot - Translation

The days are getting longer and spring is in the air. Admittedly I’m writing this in the midst of another snowstorm in New England and it doesn’t feel anything like spring, but supposedly it’s coming. And while I wait for better weather, I can enjoy the spring 2022 new releases in translation. There’s something for everyone this season, with exciting debuts, thoughtful nonfiction, stunning poetry collections, and so much more. Readers will be particularly excited to see new titles from favorite authors like Olga Tokarczuk, Elena Ferrante, and Yūko Tsushima, and beloved translators like Jennifer Croft, Ann Goldstein, and Geraldine Harcourt.

I’ve pored over the catalogs and galleys and highlighted just some of the best spring 2022 new releases in translation, and because there’s just so much to choose from, I’ve added notes for others you should seek out as well! Looking over the lists, I noticed there was even more incredible literature translated from Spanish this season than usual, more than I could fit into this list, so if you need just a few more suggestions, check out The Wonders by Elena Medel, translated by Lizzie Davis and Thomas Bunstead, Linea Nigra: An Essay on Pregnancy and Earthquakes by Jazmina Barrera, translated by Christina MacSweeney, and Portrait of an Unknown Lady by Maria Gainza, translated by Thomas Bunstead.

Best Spring 2022 New Books In Translation


Blood Feast: The Complete Short Stories of Malika Moustadraf Translated by Alice Guthrie

Malika Moustadraf is a feminist icon in contemporary Moroccan literature but she’s not well known outside of the country. Blood Feast reckons with this loss, bringing together a complete collection of her vivid and compelling short stories ― on gender, sexuality, class, illness, and more. Moustadraf is a brilliant observer and thinker and her short stories are razor-sharp and endlessly thrilling. I’m especially grateful for translator Alice Guthrie’s extensive and nuanced translator’s note and all of the Moroccan people she credits with this important work of literary recovery. (Feminist Press, February 8)

And don’t miss Violets by Kyung-Sook Shin, translated by Anton Hur. (Feminist Press, April 22)


Tender by Ariana Harwicz, Translated by Annie McDermott and Carolina Orloff

Motherhood, womanhood, lust, death, madness. There’s a reason so many readers, myself included, are obsessed with Ariana Harwicz’s dark and relentlessly good writing. Harwicz is one of the most radical figures in contemporary literature, often compared to Nathalie Sarraute, Virginia Woolf, and Sylvia Plath. Tender is the third and final book in her “Involuntary Trilogy” after Die, My Love and Feebleminded, and it finds us again in the French countryside, this time following Harwicz’s unnamed narrator’s complex and destructive relationship with her teenage son. (Charco Press, February 15)

There’s no way I’d ever be able to pick just one more Charco Press title to recommend, so do yourself a favor and buy a subscription.


In the Margins: On the Pleasures of Reading and Writing by Elena Ferrante, Translated by Ann Goldstein

In The Margins collects four new essays by Elena Ferrante, author of the Neapolitan Novels, and most recently The Lying Life of Adults. In these new essays, Ferrante writes about her literary influences and her beginnings as a reader and a writer. She discusses the work of artists she’s drawn to, including Emily Dickinson, Gertrude Stein, and Ingeborg Bachmann, among others. Thoughtful and engaging, these essays are another fascinating glimpse into Ferrante’s art and mind. (Europa Editions, March 15)

And don’t miss All the Lovers in the Night by Mieko Kawakami, translated by Sam Bett and David Boyd ― especially for fans of Kawakami’s debut novel Breasts and Eggs. (Europa Editions, May 3)


You Can Be the Last Leaf: Selected Poems by Maya Abu Al-Hayyat, Translated by Fady Joudah

Maya Abu Al-Hayyat is the director of the Palestine Writing Workshop and author of four novels, many children’s books, and four poetry collections. You Can Be the Last Leaf is her first collection to be published in English, translated by acclaimed poet Fady Joudah. It includes poems from her four collections published over two decades, allowing readers to witness the breadth of her talents. As Joudah writes in his foreword, “the multifarious Palestinian voice lives on in [her] words, ordinary as grief and daily as laughter.” And there is so much grief and laughter in this collection, loss and love, as we watch the poet over time in an unending occupation. This unceasing violence seeps into her interior world too, her home and mind. But she still fiercely demands space for desire, laughter, and hope. (Milkweed Editions, May 10)

And don’t miss The Life and Death of a Minke Whale in the Amazon: Dispatches from the Brazilian Rainforest by Fábio Zuker, translated by Ezra E. Fitz. (Milkweed Editions, May 10)


The Books of Jacob by Olga Tokarczuk, translated by Jennifer Croft

First published in Poland in 2014, The Books of Jacob has long been discussed as one of Nobel Prize–winning author Olga Tokarczuk’s most important and ambitious novels. In fact, the Nobel Prize committee described it as her magnum opus. And now, thanks to Booker International Prize–winning translator Jennifer Croft, it’s available in English. Set in mid-18th century Europe and based on historical figures and events, the novel follows Jacob Frank, a charismatic self-proclaimed messiah, and his followers. It’s next to impossible to capture this vast and expansive epic in a few words, but I’d encourage everyone to read this clever, funny, and unimaginably rich work for themselves. (Riverhead, February 1)


Woman Running In the Mountains by Yuko Tsushima, Translated by Geraldine Harcourt

Yūko Tsushima is considered one of the most important Japanese writers of her generation, known for stories that center women’s lives. I’ve always known and loved her for her painfully beautiful novel Territory of Light, which follows a woman starting her life over again with her young daughter after being left by her husband. The translation by Geraldine Harcourt is particularly exquisite and I was thrilled to discover that this early work would be published. Set in 1970s Japan, Woman Running In the Mountains is another story of a young, single mother striving to find her place in the world. It’s an equally bracing novel of single parenthood but with an expansiveness and shimmering beauty that ultimately feels like a powerful act of defiance. (NYRB Classics, February 22)


Jawbone by Mónica Ojeda, Translated by Sarah Booker

Ecuadorian writer Mónica Ojeda was included on the Bogotá39 list of the best 39 Latin American writers under 40 in 2017, and in 2019 she received the Prince Claus Next Generation Award. Jawbone is her English-language debut, and it follows Fernanda and Annelise, two inseparably close friends at an elite Catholic school who become ever more involved in the occult with their school friends. “It’s only fun if it’s dangerous,” says Annelise, perfectly capturing the reading experience of this chilling nightmare of girlhood and adolescence, full of body horror, pleasure, and pain. (Coffee House, February 8)

And don’t miss When Women Kill by Alia Trabucco Zerán, translated by Sophie Hughes. (Coffee House, April 5)


This Is Us Losing Count: Eight Russian Poets by Alla Gorbunova, Irina Kotova & Others, Translated by Elina Alter & Others

I’ve loved the Calico series from Two Lines Press since its inception. The series presents vanguard works of translated literature in strikingly designed ― and eminently collectible ― editions. This stunning bilingual collection features eight contemporary Russian poets and seven translators. I was struck by the range of voices in the collection, diverse in age, style, and from all over Russia ― some are overtly political, queer, and feminist, while others are more quietly subversive. Through each distinctive section of the collection there is the through line of memory and time, of past and present, and ultimately of the future. This Is Us Losing Count is a fascinating glimpse into modern Russian poetry that leaves me longing for more. (Two Lines Press, March 8)

Looking for even more great recommendations? Check out these 24 Must-Read 2022 Books In Translation.


Best English to Dutch dictionary - FOX21News.com - Dictionary

[unable to retrieve full-text content]


Tuesday, March 22, 2022

Tunic: Language Translation Guide - GameRant - Translation

As players make their way through Tunic, they will encounter some amount of writing that they can immediately decipher. However, much of the game's text is seemingly unreadable, as it is written in a unique language with its own characters. It is actually possible to translate the language in Tunic, though, and this guide will detail several resources that fans can use to do exactly that.

Tunic: Language Translation

To start simply, Tunic's language is composed of a variety of characters that represent sounds. While these characters occasionally correspond to the sound of a single English letter, they typically represent consonant and vowel combinations. Furthermore, the language's characters can be connected to one another to form words, and that connection is represented by a horizontal line that runs through the characters.

RELATED: Tunic: All Ability Cards

With the basic structure of the language in this game from developer Andrew Shouldice established, it is now necessary to look at its characters more closely. In essence, every character in Tunic's language is designed around a single shape, which is something like a hexagon with a couple of internal points. The exterior angles and internal points of this shape can then be connected in various configurations to create characters that are related to specific sound combinations.

Fortunately, indie game fans need not start from scratch to determine how sounds and characters relate to one another, as there are a couple of resources that can be used to bypass that preliminary step. The first of those resources is the chart in the following Reddit post, which was created by user oposdeo and showcases exactly how rendered connections translate to readable sounds. As previously noted, a single character will typically have connections for both a vowel and consonant sound, and a circle below the character indicates that the positions of the vowel and consonant sound should be swapped when reading it.

Alternatively, players can use this Tunic language translation tool, which was created by Reddit user Scylithe, to draw a character and see its associated sound. To perform this drawing, players should click and drag over the connections that are rendered in the character that they are trying to translate, and the vertical line at the bottom should be filled in when the aforementioned circle appears below the character.
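The decoding scheme described above — a consonant stroke pattern and a vowel stroke pattern combined in one glyph, with a circle underneath marking a swapped reading order — can be sketched in code. Note that the stroke names and sound values below are invented placeholders for illustration, not the actual mappings from oposdeo's chart:

```python
# Hypothetical sketch of Tunic's glyph system. The entries in these
# tables are made-up examples, NOT the real stroke-to-sound chart.
CONSONANTS = {frozenset({"top-left", "top-right"}): "k"}
VOWELS = {frozenset({"bottom"}): "ah"}

def decode_glyph(consonant_strokes, vowel_strokes, swap_marker=False):
    """Decode one glyph into its consonant+vowel sound pair.

    A circle below the glyph (swap_marker=True) means the vowel is
    read before the consonant instead of after it.
    """
    c = CONSONANTS.get(frozenset(consonant_strokes), "?")
    v = VOWELS.get(frozenset(vowel_strokes), "?")
    return v + c if swap_marker else c + v

def decode_word(glyphs):
    # Glyphs joined by the horizontal line form a single word.
    return "".join(decode_glyph(*g) for g in glyphs)
```

With a fully populated pair of tables, `decode_word` would reproduce the kind of lookup that the community translation tool automates.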

While fans of isometric games should now have the tools that they need to decipher the language in Tunic, actually performing that translation can be quite daunting. Luckily, Reddit users skititlez and RioxAA have uploaded a translated version of Tunic's instruction book for everyone to enjoy. This leaves just the non-manual text for fans to figure out, which is a much more manageable endeavor.

Tunic is available now for PC, Xbox One, and Xbox Series X.

MORE: Tunic Offers an Accessible Alternative to FromSoft's Catalog



Microsoft claims new AI model architecture improves language translation - VentureBeat - Translation



Coinciding with Nvidia’s March 2022 GPU Technology Conference, Microsoft today announced an update to Translator — its Azure service that can translate roughly 100 languages across call centers, chatbots, and third-party apps — that the company claims greatly improves the quality of Translator’s translations. Powered by a new family of AI models that can translate directly between certain languages, Microsoft says that an internal study found the translations to be up to 15% better compared with those generated by previous Translator models.

The models also power a new feature in Translator, multilingual document translation, that can translate documents containing text written in different languages.
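For developers, the Translator service described above is exposed as a REST API. The sketch below builds a v3 /translate request using only the Python standard library; the key and region values are placeholders for real Azure credentials, and the request is only constructed, not sent:

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_request(texts, to_langs, key="YOUR_KEY", region="YOUR_REGION"):
    """Build an HTTP request for the Translator v3 /translate route.

    `key` and `region` are placeholders for real Azure credentials.
    """
    query = urllib.parse.urlencode(
        [("api-version", "3.0")] + [("to", lang) for lang in to_langs]
    )
    body = json.dumps([{"Text": t} for t in texts]).encode("utf-8")
    return urllib.request.Request(
        f"{ENDPOINT}?{query}",
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Ocp-Apim-Subscription-Region": region,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# With valid credentials, urllib.request.urlopen(build_request(["Hello"],
# ["fr", "de"])) would return a JSON array with one entry per input text.
```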

Z-code Mixture of Experts

Powering Translator’s upgrades is Z-code, a part of Microsoft’s larger XYZ-code initiative to combine AI models for text, vision, audio, and language to create software that can speak, see, hear, and (hopefully) understand. The team comprises a group of scientists and engineers from Azure AI and the Project Turing research group, focusing on building multilingual, large-scale models that power various Microsoft products.

Z-code provides the framework, architecture, and models for AI-powered translation across language families. With Z-code, Microsoft says it’s using transfer learning — an AI technique that applies knowledge from one task to another, related task — to move beyond common languages, like English, and improve translation for the estimated 1,500 “low-resource” languages in the world.

Like all models, Microsoft’s learn from examples in large datasets sourced from a mixture of public and private archives (e.g., ebooks, websites such as Wikipedia, and hand-translated documents). Low-resource languages are generally defined as having under 1 million example sentences, which adds to the challenge of developing models; AI models usually perform better when given more examples.

Because many languages share linguistic elements, Microsoft trains Z-code models multilingually, allowing knowledge to transfer between languages. For example, a model’s translation skills might be used to improve its ability to understand natural (i.e., everyday) language.

Microsoft rolled out Z-code-powered enhancements to Translator last October, adding support for 12 new languages including Georgian, Tibetan, and Uyghur. Now, the company says that an improved version of Z-code — Z-code Mixture of Experts (MoE), which launched this week — can better understand “low-resourced” language nuances.

The AI models used in modern text translation, MoE or no, contain components called “neurons” that are organized into distinct layers. Each neuron is a mathematical operation that plays a key role in how the model “learns” to interpret and translate languages. MoEs are made up of small clusters of neurons that are only active under specific circumstances. Lower layers extract certain “features” from the text to be translated — i.e., characteristics — and “experts” — i.e., clusters — are called upon to evaluate those features. For example, each expert cluster can learn to handle a separate part of speech or semantic or grammatical rule.
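The routing idea behind an MoE layer can be illustrated with a toy example. This is a deliberately simplified sketch of sparse top-k gating, not Microsoft's actual Z-code implementation:

```python
import math

def softmax(scores):
    # Standard numerically-stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(features, gate_weights, experts, k=1):
    """Toy sparse Mixture-of-Experts layer.

    A gating function scores every expert, but only the top-k experts
    actually run -- the rest stay idle, which is why an MoE model with
    many parameters needs relatively little compute per input.
    """
    scores = [sum(w * f for w, f in zip(ws, features)) for ws in gate_weights]
    probs = softmax(scores)
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    output = [0.0] * len(features)
    for i in top_k:  # only the selected experts are evaluated
        for j, value in enumerate(experts[i](features)):
            output[j] += probs[i] * value
    return output

# Two toy "experts": one adds 1 to each feature, one doubles each feature.
experts = [lambda f: [v + 1 for v in f], lambda f: [v * 2 for v in f]]
gate_weights = [[10.0], [0.0]]  # gating strongly prefers expert 0 here
```

In a real MoE transformer the gate and experts are learned neural networks and routing happens per token, but the shape of the computation — score, pick top-k, run only those — is the same.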

“Z-code MoE models are a promising way forward in the language domain since they are more efficient and need fewer systems to run. The same underlying model can be fine-tuned to perform different language understanding tasks such as translating between languages, summarizing a speech, offering ways to complete a sentence or generating suggested tweets, instead of having to develop separate models for each of those narrow purposes,” Xuedong Huang, chief technology officer at Microsoft’s Azure AI division, told VentureBeat via email. “While the Z-code MoE models learn universal representation, specific parts of the model can specialize in particular languages and linguistics characteristics to enable better translation.”

Compared with other model architectures, MoEs have some advantages. The experts can receive a mix of data, but only a few experts remain active at any one time, meaning that even a huge model needs only a small amount of processing power in order to develop or run. In fact, MoE is one of the few architectures demonstrated to scale to more than a trillion parameters. (Parameters are the part of the model that’s learned from example text data, and generally speaking — especially in language — the correlation between the number of parameters and sophistication has held up remarkably well.)

To illustrate, an MoE model containing 1.6 trillion parameters requires compute resources approximately equal to that of a 10 billion-parameter conventional model, by Microsoft’s estimation. The cost isn’t insubstantial, to be fair — a 2020 study from startup AI21 Labs pegged the expenses for developing a text-generating model with only 1.5 billion parameters at between $80,000 and $1.6 million. But it’s more efficient than other methods. Microsoft’s and Nvidia’s recently released Megatron 530B language model, which has 530 billion parameters, was originally developed across 560 Nvidia DGX A100 servers. A single DGX A100 starts at $199,000.

MoEs were first proposed in the ’90s, and research papers in recent years from companies including Google describe experiments with trillion-parameter-plus MoE language models. But Microsoft claims that Z-code MoE is the first MoE language model to reach production.

“Using an MoE approach allows us to achieve performance and quality benefits more efficiently, as it only engages a portion of the model to complete a task, as opposed to other architectures that have to activate an entire AI model to run every request. This architecture allows massive scale in the number of model parameters while keeping the amount of compute constant,” Huang continued. “For our production model deployment, the training dataset was 5 billion parameter models, which are 80 times larger than Microsoft’s currently deployed models. The models are trained on 64 GPUs. A single MoE model can replace 20 of the current translation models, increasing efficiency of training MoE models while also improving translation accuracy.”

Future work

While Microsoft says that Z-code MoE has led to great strides in improving language translation, the problem isn’t solved. Not by a long shot.

Because of biases in public example text, non-English models continue to perform worse than their English-language counterparts. For example, languages in Wikipedia-based datasets vary not only by size but in the percentage of stubs without content, the number of edits, and the total number of users (because not all speakers of a language have access to Wikipedia). Beyond Wikipedia, ebooks in some languages, like Arabic and Urdu, are more commonly available as scanned images versus text, which requires processing with optical character recognition tools that can dip to as low as 70% accuracy.

A recent piece in The Conversation points out the other flaws in AI-powered translation, including different forms of gender bias. In certain languages, Google Translate once presupposed that doctors were male while nurses were female, while Bing’s translator translated phrases like “the table is soft” as the feminine “die Tabelle” in German (which refers to a table of figures). Other translations miss the meaning of the original text entirely. In one study referenced by The Conversation, the headline “UK car industry in brace position ahead of Brexit deadline” was translated by an AI system as “L’industrie automobile britannique en position de force avant l’échéance du Brexit,” which implies that the U.K. car industry is in a position of strength as opposed to weakness.

“No matter how fluent the suggested translation appears, these types of errors (incorrect terminology, omissions, mistranslations) abound in machine translation output,” Guillaume Deneufbourg, a researcher in language sciences at the Université de Lille in Lille, France, wrote for The Conversation. “Another issue with machine translation which people may be less aware of is a process known as normalization. If new translations are only ever made using existing ones, over time, the process can stifle inventiveness, creativity, and originality.”

One study from Tilburg University and the University of Maryland referred to the normalization phenomenon as “translationese,” with the coauthors finding a quantifiable loss of “linguistic richness” in AI systems’ translations. While the study points out that this might be a desirable side effect if the goal is to simplify the translation, normalization becomes problematic when it prevents systems from making grammatically correct choices and reduces diversity in “morphologically richer” languages, like Spanish and French.

Microsoft says that it continues to develop new methods to improve translation, both through architectural improvements and techniques to mitigate bias in example data.

“Today’s machine learning models need huge translation data sets with dialects for training, and there may not be enough data for all the desired languages and dialects, particularly in smaller markets,” Huang added. “The ability to share knowledge across different languages enables Z-code to produce more accurate results for underrepresented languages that don’t have a huge number of translation examples to learn from. This will help improve AI fairness and ensure that high-quality translations are not restricted to languages with rich training resources only.”

