Sunday, November 14, 2021

Oxford English Dictionary Makes "Vax" 2021's Word of the Year - The Great Courses Daily News - Dictionary

By Jonny Lupsha, Current Events Writer

The most impactful word in the English language this year is “vax”—no surprise there. “Vax,” a shortening of “vaccination,” has been named the Word of the Year by the publishers of the Oxford English Dictionary. It was formed by “clipping” a longer word.


Whether an individual is for or against the COVID-19 vaccination, there’s no doubt that the subject has permeated the everyday lexicon. In all likelihood, most of us have discussed the coronavirus vaccine at some point as the pandemic has continued through 2021. Additionally, there’s a good chance we’ve used the shortened word “vax” instead of “vaccination.”

Due to its prominence in language for the year, it’s no surprise that the publishers of the Oxford English Dictionary have made “vax” their Word of the Year for 2021. In her video series The Secret Life of Words: English Words and Their Origins, Dr. Anne Curzan, Arthur F. Thurnau Professor of English at the University of Michigan, explained different types of shortened words.

So That’s What That’s Called

Acronyms are one common way of shortening terms, such as saying “laser” instead of light amplification by stimulated emission of radiation. Taking the first letter from each word and pronouncing the result as a new word saves time. Another way of abbreviating words is clipping.

“Clipping is just what it sounds like: We’re clipping part of the word off; that could be the front or that could be the back,” Dr. Curzan said. “Off the front, we’d get something like ‘rents’ for parents or ‘do’ from hairdo. ‘Blog’ is a clipping from weblog.”

On the other hand, if we clip the back end of the word off, we get shortened words like “limo” from limousine, “rehab” from rehabilitation, and even “mob.” Surprisingly, mob is a clip of the Latin phrase “mobile vulgus,” meaning “a fickle crowd.” Additionally, sometimes we take the middle of words and leave off the front and the back, such as “fridge” from refrigerator.

“Some of these clippings probably feel slangy to you because they are, but some clippings are now entrenched in the language and no longer feel slangy or even like clippings,” Dr. Curzan said. “‘Flu’ is a clipping from influenza; ‘phone’ from telephone; deli, lab, dorm. These, to us, now feel, I think, well established in the lexicon.”
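The clipping patterns Dr. Curzan describes can be sketched, just for illustration, as simple substring operations. This is a toy demonstration, not a linguistic tool, and the function names are our own:

```python
# A playful sketch of clipping as substring operations.

def clip_back(word, keep):
    """Back-clipping: keep the front of the word ("limo" from "limousine")."""
    return word[:keep]

def clip_front(word, keep):
    """Fore-clipping: keep the end of the word ("rents" from "parents")."""
    return word[-keep:]

def clip_middle(word, start, end):
    """Medial clipping: keep the middle ("flu" from "influenza")."""
    return word[start:end]

print(clip_back("limousine", 4))       # limo
print(clip_back("rehabilitation", 5))  # rehab
print(clip_front("parents", 5))        # rents
print(clip_middle("influenza", 2, 5))  # flu
```

Of course, real clipping is driven by pronunciation and usage, not character counts, which is why “vax” ends in an “x” that “vaccination” never had.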

Back Formation

There are also what are called minor word formation processes. One kind is blending, also known as making a portmanteau word. For example, “smog” is a blend of the words smoke and fog. “Netiquette” is a portmanteau of “Net”—itself a clipping of Internet—and etiquette. Another minor word formation process is back formation.

“This is where we take a word in the language and we reanalyze the parts, and from that we create a new word,” Dr. Curzan said. “We borrowed into English the word ‘beggar’ in the 13th century. You can even see from the spelling, which ends with ‘-ar,’ that it’s not a verb plus a suffix.”

However, the word “beggar” sounds a lot like “baker.” So if a baker bakes, and a taker takes, then a beggar must beg. This line of thinking caused people to backform the word “beg” from “beggar.” Surprisingly, words like this are everywhere.

“English had the word ‘diagnosis’ before we had the verb ‘diagnose’—that was backformed from diagnosis. ‘Television’ gives us the verb ‘televise;’ ‘lazy’ gives us the verb ‘laze,'” Dr. Curzan said. “If you’re lazy, what are you doing? You must be lazing around.”

“Vax” and its derivatives are popular enough to have become Oxford English Dictionary’s Word of the Year 2021, following previous winners like selfie, unfriend, and toxic. Which word will top the charts in 2022?

Edited by Angela Shoemaker, The Great Courses Daily


Google Document Translation Now Generally Available - InfoQ.com - Translation

Google Cloud recently announced the general availability of Document Translation, a new feature of Translation API Advanced that allows formatting of documents to be retained throughout the translation process.

Until now the translation of documents required that text was separated from the layout attributes, with the document’s structure either lost or recreated after the text translation. Sarah Weldon, product manager at Google, explains:

One of the biggest differentiators for Translation API Advanced’s document translation capabilities is the ability to do real-time, synchronous processing for a single file. For example, if you are translating a business document such as HR documentation, online translation provides flexibility for smaller files and provides faster results (...) Meanwhile, batch translation allows customers to translate multiple files into multiple languages in a single request.

The new service lets customers translate documents in over 100 languages and supports formats such as DOCX, PPTX, XLSX, and PDF while preserving document formatting. The general availability release adds right-to-left language support for PDFs; preservation of font size, font color, font style, and hyperlinks for native PDFs; and configurable endpoints to keep machine translation processing within the European Union.
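As a rough illustration of how a synchronous document translation call might be assembled, here is a minimal Python sketch. It assumes the `google-cloud-translate` v3 client library; the `build_request` helper and the project and file names are hypothetical, and the exact client fields may differ from this sketch:

```python
# Hypothetical sketch of a synchronous Document Translation request
# (assumes the google-cloud-translate v3 client and configured credentials).
from pathlib import Path

# Input formats mentioned in the article, mapped to their MIME types.
MIME_TYPES = {
    ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
    ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    ".pdf": "application/pdf",
}

def build_request(project_id, file_path, target_lang):
    """Assemble a translate_document request payload for a local file."""
    suffix = Path(file_path).suffix.lower()
    if suffix not in MIME_TYPES:
        raise ValueError(f"unsupported format: {suffix}")
    return {
        "parent": f"projects/{project_id}/locations/global",
        "target_language_code": target_lang,
        "document_input_config": {
            "content": Path(file_path).read_bytes(),
            "mime_type": MIME_TYPES[suffix],
        },
    }

# Actual network call, shown for illustration only:
# from google.cloud import translate_v3 as translate
# client = translate.TranslationServiceClient()
# response = client.translate_document(
#     request=build_request("my-project", "report.docx", "fr"))
```

The batch variant mentioned by Weldon works analogously but takes Cloud Storage URIs for multiple input files and target languages in a single request.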

To improve the accuracy of the results, Document Translation now supports four different translation approaches: customers can rely on Google’s state-of-the-art translation models, import glossaries that define preferred translations for specific terms and phrases, choose a pre-trained model, or build custom translation models with AutoML.

In a separate article, Tristan Li, customer engineer at Google, and Wayne Davis, customer engineering manager at Google, highlight the best practices for translating websites with Translation API. Google is not the only cloud provider offering API for document translation. As recently reported on InfoQ, Microsoft Translator now supports over 100 languages and dialects, covering languages natively spoken by 72% of the world population. AWS offers Amazon Translate to localize websites and applications or translate large volumes of text for analysis.

Rafael Quevedo questions the accuracy of the new API:

The cloud projects are at mercy of the diversity team that designed them. Google Translator can claim that it can translate languages from all types using the existing literature, but can it deal with old style TV phrases? Or slang?

Cloud Translation charges customers by the amount of text processed by the service, starting at 20 USD per million characters. Additional charges apply for the Advanced API calls detectLanguage, translateText, batchTranslateText, translateDocument, and batchTranslateDocument. For example, translateDocument costs 0.08 USD for every page processed.


Saturday, November 13, 2021


Meta AI Puts A Step Towards Building Universal Translation System - Analytics India Magazine - Translation


What does the curved arrow in the Amazon logo signify? It portrays that one can get products from A to Z on a single platform, making the task easy. The goal for a translation system (producing text in one language from another) is much the same: everything from a single model.

To that end, Meta AI announced a new breakthrough: a multilingual model that outperformed the present state-of-the-art bilingual models on 10 out of 14 language pairs and won at the Conference on Machine Translation (WMT), a prestigious MT competition. The model is a step towards building a universal translation system.

The Bottleneck

The ultimate goal of the machine translation (MT) field is to create a universal translation system that will allow everyone to access information and communicate more effectively. However, some of the existing fundamental limitations need to be resolved for that vision to be a reality in the future.

Presently, many modern MT systems rely on bilingual models, which often require a large number of labelled examples for each language pair and task. Unfortunately, many languages, Icelandic and Hausa for example, have limited training data, which makes the present approaches impractical for them. The complexity also makes it difficult for a platform like Facebook to scale present models to practical applications, where billions of users post every day in hundreds of languages.

Meta to the rescue

As per the team at Meta, the MT field needs a shift from bilingual models towards multilingual translation, where a single model can translate many language pairs at once. A better multilingual model stands to benefit both low- and high-resource languages, since such models are simpler, more scalable and more efficient.


Last year, Facebook AI (now Meta) introduced M2M-100, the first multilingual model to translate between any pair of 100 languages without relying on English-centric data. The team deployed different mining strategies to prepare a dataset of 7.5 billion sentences covering 100 languages as translation data, and employed a variety of scaling strategies to create a global model with 15 billion parameters that incorporates data from related languages and reflects a more diverse set of scripts and morphologies. The model proved efficient for low-resource languages, but its performance lagged for high-resource ones.

Building on this previous model, the team made three new advancements:

  • large-scale data mining
  • scaling model capacity
  • more efficient infrastructure

The team built two multilingual systems for the WMT 2021 task: one translating any language to English, and one translating English to any language. They utilised parallel data mining techniques such as CCMatrix, which the company claims is the largest dataset of web-based, high-quality bitexts for training translation models. The CCMatrix dataset is more than 50 times larger than the WikiMatrix corpus Facebook provided earlier, with over 4.5 billion parallel sentences in 576 language pairs extracted from CommonCrawl public dataset snapshots.

Additionally, the model capacity has been raised from 15 billion parameters to 52 billion. Large-scale training was made five times faster than in previous models by adding Fully Sharded Data-Parallel, a GPU memory-saving tool from Meta itself. Further, it is important to note that scaling model size often results in high computational costs. To overcome this, the team replaced the FeedForward block in every alternate Transformer layer with a Sparsely Gated Mixture-of-Experts layer with top-2 gating, in both the encoder and the decoder. As a result, only a subset of the model’s parameters is used per input sequence.
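To illustrate the idea behind top-2 gating, here is a minimal NumPy sketch of a sparsely gated mixture-of-experts layer. This is our own simplified illustration for a single input vector, not Meta's implementation; a real layer gates whole batches of tokens and learns the gating weights during training:

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Sparsely gated mixture-of-experts with top-2 gating (toy sketch).

    x:       (d,) input vector
    gate_w:  (d, n_experts) gating weight matrix
    experts: list of callables, each mapping a (d,) vector to a (d,) vector

    Only the two highest-scoring experts are evaluated per input, so most
    of the layer's parameters stay unused for any given input sequence.
    """
    scores = x @ gate_w                       # one gating logit per expert
    top2 = np.argsort(scores)[-2:]            # indices of the two best experts
    weights = np.exp(scores[top2])
    weights /= weights.sum()                  # softmax over the selected pair
    return sum(w * experts[i](x) for w, i in zip(weights, top2))
```

With, say, 64 experts per layer, only 2 are run per input, which is how total parameter count can grow far faster than per-input compute.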

Machine translation has made significant progress in breaking down barriers, but most of it has focused on a small number of commonly spoken languages. Low-resource translation remains MT’s “last mile” dilemma and the subfield’s biggest open challenge today.




Spotlight on Translation: November Edition - lareviewofbooks - Translation

In this, our ninth monthly spotlight, you’ll find reviews of literary theory translated from Russian, a novel translated from Icelandic, and verse translated from Bengali and from a French “infused with Mauritian Creole, Old Scandinavian, Old French, Hindi, Bhojpuri, Urdu, and various neologisms”; an essay, occasioned by a new novel translated from Spanish, on the legacy of Roberto Bolaño; as well as interviews with one of Russia’s most accomplished and provocative authors, Ludmilla Petrushevskaya, with the editor of Granta’s new “Best of Young Spanish-Language Novelists” issue, Valerie Miles, as well as with Michael Cooperson, translator of classical Arabic poet al-Ḥarīrī’s seminal 12th-century Maqāmāt, and with Curt Leviant, translator of Sholem Aleichem’s serialized Yiddish novella Moshkeleh the Thief. You’ll also find a conversation with Mark Haddon, co-framer, with Jennifer Croft, of an open letter calling on writers to ask their publishers to give translators cover credits.

— Boris Dralyuk, Editor-in-Chief


 



NAR relies on the dictionary in its latest volley against the DOJ - Inman - Dictionary

The National Association of Realtors’ latest volley in its case against the U.S. Department of Justice includes images from Merriam-Webster’s dictionary that define the words “close” and “open” — a stark example of how far apart the two sides are as they battle over whether the DOJ can investigate two of NAR’s rules.

On Friday, the 1.5 million-member trade group filed a response to the DOJ’s opposition last month to a petition from NAR to quash or modify the federal agency’s civil investigative demand seeking new information on rules regarding buyer broker commissions and pocket listings.

The thrust of NAR’s argument is that the DOJ agreed — as part of a settlement agreement the agency abruptly withdrew from on July 1 — to close investigations into two rules: NAR’s Clear Cooperation Policy, which requires listing brokers to submit a listing to their MLS within one business day of marketing a property to the public, and NAR’s Participation Rule, which requires listing brokers to offer commissions to buyer brokers in order to participate in Realtor-affiliated multiple listing services.

NAR contends that the DOJ did not actually close the investigations and that the civil investigative demand the agency sent the trade group on July 6 continued those probes, which makes that demand “invalid.”

“In October 2020, the Antitrust Division unconditionally accepted NAR’s settlement offer, which required a commitment from the Antitrust Division to ‘close’ its investigation of the Participation Rule and Clear Cooperation Policy,” NAR’s attorneys wrote in the filing.

“In this context, the word ‘close’ is a verb that means ‘to bring to an end.’ That term must be construed according to its ‘ordinary meaning,’ and the Antitrust Division’s position, that it was free to ‘open’ that same investigation at any time, contradicts the clear meaning of the parties’ agreement.

“By its plain meaning, the verb ‘open’ means ‘to move (as a door) from a closed position’ or ‘to begin a course or activity.’ Thus, what the Antitrust Division has done is the exact opposite of what the word ‘close’ contemplates in the parties’ agreement.”

NAR’s response is 18 pages long with 547 pages of exhibits, including photographs of the dictionary entries for “close” and “open” starting on page 510.

‘Close’ means ‘close’

When it withdrew from the proposed settlement, the DOJ publicly stated that it was concerned the agreement would prevent it from protecting real estate brokerage competition and from pursuing other antitrust claims relating to NAR’s rules. Those statements “show that it had agreed to close its investigation of the Participation Rule and Clear Cooperation Policy, and that it could not have open investigations concerning either rule,” NAR asserts in its filing.

“The Antitrust Division’s actions also confirm that it understood ‘close’ means ‘close,'” NAR’s attorneys wrote.

“Before trying to withdraw from the settlement, the Antitrust Division asked NAR to modify the settlement agreement to allow it to investigate the Participation Rule and Clear Cooperation Policy.

“The fact that the Antitrust Division sought a modification of the settlement agreement is a concession that the settlement, unless modified, does in fact place limits on the Antitrust Division’s ability to investigate NAR’s Participation Rule and Clear Cooperation Policy.

“The words and actions of the Antitrust Division therefore refute its claim that the settlement agreement imposes no limitation on further investigation of the Participation Rule and Clear Cooperation Policy.”

In a letter sent the same day as the settlement agreement was proposed in court, the DOJ informed NAR it had closed its investigations of the two policies. In that letter, the DOJ included a sentence that said, “No inference should be drawn, however, from the Division’s decision to close its investigation into these rules, policies or practices not addressed by the consent decree.”

According to NAR, that sentence does not mean that the DOJ reserved some right to future investigation of the policies because the DOJ did not ask for and the parties did not negotiate that term.

“Instead, it unconditionally accepted NAR’s settlement proposal, which did not include such a limitation, and that means that there was no such reservation in the settlement agreement,” the filing said.

“[T]hat sentence did not, and cannot, change the plain meaning of the settlement agreement. It only cautions third parties that they should not draw an inference from the Antitrust Division’s decision to close the investigation, which is non-controversial,” the filing added.

Because NAR did not negotiate the language of the closing letter or even see it before it was sent, then “the second sentence of the letter cannot be considered part of the agreement if it takes on the meaning proposed by the Antitrust Division,” NAR’s attorneys wrote.

“A party cannot introduce a new, material term to a contract after an agreement is reached simply because it no longer likes part of the deal it struck.”

What the filing does not address

NAR’s filing makes clear that the trade group considers the settlement agreement binding on the government and therefore its arguments to set aside the latest probe hinge on the court enforcing the deal.

But in its own, previous filing, the DOJ argued that the deal was not final and therefore the agency could withdraw from it.

“In 2020, the United States and NAR discussed, and the United States eventually filed, a proposed settlement that would have culminated in entry of a consent judgment by the Court,” the DOJ’s filing said.

“But no consent judgment was ever entered.”

The Antitrust Procedures and Penalties Act, known as the Tunney Act, required a public notice and comment period before any final settlement with NAR, and it was during this process that the DOJ’s Antitrust Division concluded that the reservation of rights provision in the proposed final judgment “should be revised to avoid potential confusion about whether the judgment would foreclose further action by the Division on matters not covered by the judgment,” the DOJ’s attorneys wrote.

When NAR did not agree to the modification, the agency withdrew from the deal as permitted by paragraph 2 of the proposed settlement, they added.

Screenshot from proposed NAR-DOJ settlement

NAR’s latest filing does not address this particular argument from the DOJ.

The DOJ acknowledged that it did agree to issue a closing letter confirming to NAR that it had closed an investigation of two of NAR’s policies. However, the agency’s filing indicates that it considers its latest demand a new investigation and not a continuation of the previous one, as NAR asserts.

“The three-sentence closing letter contained no commitment to refrain from future investigations of NAR or its practices or from issuing new CIDs in conjunction with such investigations,” the DOJ filing said.

‘NAR is moving forward’

This week, at its Realtors Conference & Expo, NAR is considering three MLS policy proposals inspired by the contested settlement with the DOJ: a rule preventing buyer agents from touting their services as “free”; a ban on filtering listings by commission or brokerage name; and a policy requiring MLSs to display buyer broker commissions on their listing sites and in the data feeds they provide to agents and brokers.


“The Department of Justice’s (DOJ) withdrawal from a fully binding and executed agreement goes against public policy standards and consumer interests,” NAR spokesperson Mantill Williams said in an emailed statement.

“NAR is moving forward on the pro-consumer measures in the agreement and remains committed to regularly reviewing and updating our policies for local broker marketplaces in order to continue to advance efficient, equitable and transparent practices for the benefit of consumers.

“Allowing the DOJ to backtrack on our binding agreement would not only undermine public confidence that the government will keep its word, but also undercut the pro-consumer changes advanced in the agreement. NAR is living up to its commitments to consumers — we simply expect the DOJ to do the same.”



Friday, November 12, 2021

How the fan translation of Squaresoft's utterly bizarre Racing Lagoon came together in just 6 months - PC Gamer - Translation

In 1999, Japanese RPG developer Squaresoft was on top of the world. Final Fantasy 7 and Final Fantasy 8 were blockbuster successes and every other quirky RPG it released seemed destined to become a cult classic. But even at the peak of its popularity Square was still releasing games it decided were too niche, too hard to translate, or too Japanese to release in the west. One of those was Racing Lagoon, an RPG that blended trendy street racing and bizarre, almost poetic writing into a game that nearly defies description. Imagine if E.E. Cummings wrote the script for a Fast & Furious movie and you'll be on roughly the right track.

22 years later, Racing Lagoon is finally playable in English—and we have a fan translator who goes by the name 'Hilltop Works' to thank for channeling its singular style into English, in the process coining the best gaming diss since 'spoony bards.'

"This lady who's the boss of Chinatown throws an insult at you, and I wanted to use 'green beans,' an insult no one's used before, I don't think," he says. "But you hear it and you kind of understand what it means, you know? 'Green beans' means someone who's kind of young, not fit to be where you are. The line was: 'Get it, green beans? Chinatown has rules.'"

In Japanese the insult is something simple like "brat," but the goofy localization works in a game that's famously quirky even in Japan. Hilltop says Racing Lagoon has had something of a rediscovery at home in recent years, because even there, there's nothing else like it. "They call the speech Lagoon-go, 'go' meaning accent, where every character adds in random English words and speaks very poetically." 

That unique language has made Racing Lagoon a challenging translation process, but it's also happened at a shocking pace in the world of fan translations. These projects often take years as volunteer writers and hackers come and go. Many are abandoned and never finished. But Hilltop announced Racing Lagoon's translation on May 23 and released the finished patch on November 11, just shy of six months later. 

"When I announced the project and when I released the prologue patch, nobody else had worked on it. I did the programming, I did the editing, I was planning to do everything on my own until people reached out to me," Hilltop says. But he didn't anticipate how many people had had reactions to Racing Lagoon over the years similar to his own when he first discovered it.

Tune In

Racing Lagoon OST


PancakeTaicho cites Racing Lagoon's music as the main reason he fell in love with the game. "The soundtrack is a world unto itself that I just wanted to hang out in all the time," he says.

If you want to buy a super rare CD of the jazz fusion saxophone wailing over techno, be prepared to pay as much as $1,000.

"I just want people to see this game. This game is wild. This game is absolutely nutters crazy. There is just nothing like it, at all, and people need to see it," he says. "I think of this game like a beautiful diamond. It's a pure crystal—no part of it could really ever be recreated."

The late '90s street racing aesthetic is intensely nostalgic for 30-somethings who grew up watching Initial D, playing Gran Turismo, and lusting after Nissan Skylines. Suddenly there was a chance that this cult object could be playable in English, and people who loved the game jumped at the opportunity to help.

"My friends have been having to suffer through me talking about it non-stop for the past decade," says Syd-88, who joined the translation project not as a translator, but as an automotive consultant. Syd first played Racing Lagoon in 2011 and has wanted to help make it easier for other people to play it for years.

"The game dives into Japanese tuner culture as a whole in a way that I've never seen anything else before or after," Syd says. Gran Turismo was its contemporary, but only for legal racing. Tokyo Xtreme Racer tapped into street racing, but was more grounded, without Racing Lagoon's story or unique language.


Translator PancakeTaicho currently lives in Japan, where he first saw a copy of Racing Lagoon at a used game store on a trip in 2009. He loved Initial D, so he bought the game and unexpectedly found himself obsessed with the soundtrack. "I've listened to it more than anything in my whole life, I think," he says. PancakeTaicho actually tried to learn ROM hacking a few years ago and worked on Racing Lagoon, but didn't have the technical skills to make it work. When he saw Hilltop's tweet, he jumped at the chance to help translate. Before long, Hilltop's solo project had grown into an eight-person team effort.

Hilltop works in videogame QA by day and on the Racing Lagoon translation in his spare time, divvying up the hefty script between volunteers and hosting editing sessions where they talk through scenes line-by-line. "Hilltop is like, I don't want to say jack of all trades, because that means it sounds like he's not good," PancakeTaicho says. "I think he's more like a one-man army. There's all the programming stuff, but I think he's also a really good localizer. He has a knack of helping find the right line, the right turn of phrase."

Racing Lagoon is actually only Hilltop's second-ever translation project after Dr. Slump, a PS1 game based on the comedic manga Akira Toriyama created before Dragon Ball. He studied computer science in college but never became a full-time programmer, and started learning Japanese years ago by listening to tapes on his commute.

"What could I do with these two skills? It was fan translation," he says. "I wanted to do something with my life. I was unemployed at the time, not really doing very well. And I had never really produced anything—ever, really—for public consumption."

Learning PS1 romhacking was difficult. For the first three months he was just trying to understand how to hack into Dr. Slump and wrap his head around data compression, a field of programming he didn't have any experience in. His notebook from the start of that project is filled with pages of assembly language code that he was trying to debug. Finally he understood it and was able to extract the script. On Racing Lagoon, the same process took only two days.

Though he now has a day job in gaming QA, Hilltop has found fan translation "hugely" fulfilling in a way no paying job ever has been. While most fan translators seem content to treat it purely as a hobby, and others are professional translators who take on the occasional passion project, Hilltop is somewhere in the middle. He started a Patreon for Hilltop Works, which states that if he can get 600 monthly backers, he'll quit his job and work on translation patches full time. When we talked midway through Racing Lagoon's translation, he hoped that the flurry of interest when it was finished would bring with it more Patreon backers. "If I could do this forever, I would 100%," he says. "I would much, much prefer this to just about anything."

The question now is whether the group that came together on this project will stick around for whatever Hilltop decides to translate next, or if Racing Lagoon was an irresistible anomaly. It really is rare to find a game with a history as rich as Racing Lagoon's, that ties so directly into the broader culture of when it was made.

"Somebody went around Japan and laser scanned a lot of the locations for it," says Syd-88. "So somebody had a lot of passion and wanted to capture that moment in time. Hell, I'm not sure you could recreate something like that today. It wouldn't have the same charm or effect."

Hilltop adds that there's a running theme in Racing Lagoon about how parts of Yokohama, where the game's set, are being westernized—that things that were once written in Japanese lettering are being written in English lettering as part of the "21st century shift."

  • Monster-R (R33 GTR) - "This unique R33 has a real-life counterpart: Built by a now defunct tuning shop, it’s a proper monster machine!" says Syd
  • 86-Lev - "Based on the Toyota AE86, your starter car was modified for extra power over the standard variants in-game," says Syd
  • R30 - "Your team leader boasts a unique machine that’s normally unobtainable. It’s largely based off the ‘83 Nissan Skyline Super Silhouette racecar," says Syd
  • Mini - "Racing Lagoon has a host of imported machines. This Mini, a 911, Camaro, and more exotics await you..." says Syd

(Image credits: Squaresoft / Hilltop Works)

"A direct translation of the script would be gibberish," he says. "Half of it is weird, random English words—half of it is poetic nonsense, half of it is just obtuse ridiculousness. We have to cobble that together into a script that not only makes sense but still has that flavor, that still feels like you're playing a Squaresoft JRPG from the '90s… I wonder if really the whole thing is some sort of wild commentary that went over everyone's heads, in a way, about how the local culture, the local scene, was slowly getting overwritten by these western influences."

It's perhaps ironic that it took a full English translation to bring Racing Lagoon's commentary back to the surface after two decades. Even if the Racing Lagoon fan translators go their separate ways now, Hilltop has plenty of other ideas for PS1 games to work on next, and the hope to someday move beyond fan translations altogether while still remaining independent. 

He loves the whole process: writing, hacking, reworking graphics. "The absolute dream scenario is I would actually work on, say some company wants to re-release a PS1 game, they'll hand me the disc and say 'give me this in English,'" he says. "That would make me… that would be a dream come true."
