Tuesday, March 30, 2021

West Michigan ad agency announces Bible translation campaign - grbj.com - Translation

In partnership with illumiNations, an alliance of the world’s leading Bible translation organizations, a Grand Haven-based advertising agency rolled out the “I Want to Know” campaign.

HAVEN | a creative hub’s campaign will give people the opportunity to sponsor the translation of one or more Bible verses in partnership with one of the 3,800 language communities worldwide that don’t yet have a complete Bible.

“Can you imagine not having the Bible in English or your native language?” said Mart Green, ministry investment officer at Hobby Lobby and avid supporter of the illumiNations Bible translation movement. “One billion people still don’t know what God’s word has to say to them. We can help fulfill the Great Commission and eradicate Bible poverty in this generation.”

The goal of the campaign is that 95% of the world’s population will have access to a full Bible, 99.96% will have access to a New Testament and 100% will have access to at least some portion of Scripture by 2033.

“The goal of translating the Bible into every language for all people has been a Goliath of biblical proportions for generations,” Green said. “But now, we’re on the brink of a giant slingshot; every person can have at least a portion of the Bible in their own language within the next 12 years.”

To accomplish that goal, participants in the I Want to Know campaign can sponsor one translated verse of Scripture for $35.

Individuals also can post the Bible verse they “want the world to know” on social media using the hashtag #IWTKBible.

“The translators are in place, the strategy is in place and with support from Christians across the U.S. and around the world, we can help every single person on earth access scripture in the language they understand best,” said Bill McKendry, campaign creative director, founder and chief creative officer of HAVEN.

Python Guide to HiSD: Image-to-Image translation via Hierarchical Style Disentanglement - Analytics India Magazine - Translation


Image-to-image translation is a field in the computer vision domain that deals with generating a modified image from an original input image based on certain conditions. The conditions can be multiple labels, multiple styles, or both. In recent successful methods, the input image is translated based on the labels, and the output image is generated from the translated feature map based on the styles. The labels and styles are fed to the models via text or reference images. The translation sometimes introduces unnecessary manipulations and alterations of identity attributes that are difficult to control in a semi-supervised setting.

Chinese researchers Xinyang Li, Shengchuan Zhang, Jie Hu, Liujuan Cao, Xiaopeng Hong, Xudong Mao, Feiyue Huang, Yongjian Wu and Rongrong Ji have introduced a new approach to control the image-to-image translation process via Hierarchical Style Disentanglement (HiSD).

HiSD breaks the original labels into tags and attributes. It ensures that the tags are independent of each other and that the attributes within a tag are mutually exclusive. During inference, the model first resolves the tags and then the attributes in a sequential manner. Finally, styles are defined by latent codes sampled from noise or extracted from reference images, so improper or unwanted manipulations are avoided. The tags, the attributes and the style requirements are arranged in a clear hierarchical structure that leads to state-of-the-art disentanglement performance on many public datasets.

Hierarchical representation of tags, attributes and styles in HiSD

HiSD processes all the conditions (i.e., tags, attributes and styles) independently, so that they can be controlled individually or in combination, both across and within conditions. The model extracts styles from reference images by encoding them into latent codes, or generates them from Gaussian noise. It adds a style to the input image without affecting its identity or its other styles, tags and attributes.
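
As a rough sketch of this hierarchy, the conditions can be pictured as nested structures. The tag and attribute names below are illustrative only, mirroring the CelebA-HQ labels discussed later in this guide rather than the exact identifiers used by the HiSD code.

 # Illustrative sketch only: each independent tag owns a set of mutually exclusive
 # attributes, and each chosen attribute is realised by a style code that is either
 # sampled from Gaussian noise (latent-guided) or extracted from a reference image
 # (reference-guided).
 hierarchy = {
     'bangs':      ['with', 'without'],
     'glasses':    ['with', 'without'],
     'hair_color': ['black', 'blond', 'brown'],
 }

 # A translation request is then a sequence of steps, one per tag to be changed;
 # untouched tags (and the identity) are left as they are.
 request = [
     {'tag': 'bangs',   'attribute': 'with', 'style': 'latent-guided'},     # style from noise
     {'tag': 'glasses', 'attribute': 'with', 'style': 'reference-guided'},  # style from an image
 ]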



The multi-style task with HiSD without any loss in identity
The multi-attribute task with HiSD without any loss in identity
The multi-tag task with HiSD without any loss in identity

Python implementation of HiSD

HiSD requires a Python environment and the PyTorch framework to set up and run. A GPU runtime is optional: the pre-trained HiSD model can be loaded and inference performed on a CPU runtime. Install the dependencies using the following command.

!pip install tensorboardx

The following command clones the source code from the official repository to the local machine.

!git clone https://github.com/imlixinyang/HiSD.git

Change the working directory to the cloned HiSD/ folder using the following command.

%cd HiSD/

Download the publicly available CelebAMask-HQ dataset from Google Drive to the local machine to proceed further. Ensure that the training images are stored in the directory /HiSD/datasets and their corresponding labels in the directory /HiSD/labels. The following command preprocesses the dataset for training.

!python /content/HiSD/preprocessors/celeba-hq.py --img_path /HiSD/datasets/ --label_path /HiSD/labels/ --target_path datasets --start 3002 --end 30002

The following command trains the model using the configuration file, which can be adjusted to fit the machine and the dataset. It creates a new directory named ‘outputs’ under the current path to store the training outputs.

!python core/train.py --config configs/celeba-hq.yaml --gpus 0
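
Optionally, the configuration file can be loaded and inspected before a long training run to confirm that it matches the machine and dataset. A minimal sketch, assuming the training configuration exposes the same noise_dim and new_size fields that the inference configuration uses later in this guide:

 from core.utils import get_config

 # Optional sanity check on the training configuration (the keys below are
 # assumed to match those of the inference configuration used later).
 config = get_config('configs/celeba-hq.yaml')
 print('noise_dim:', config['noise_dim'])   # dimensionality of the latent noise z
 print('new_size:', config['new_size'])     # image resolution used by the generator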

Once the dataset has been preprocessed and the model checkpoint is available, they can be used for similar applications. A sample inference implementation is carried out with the following simple Python code. First, import the necessary modules and libraries.

 %cd /content/HiSD/
 from core.utils import get_config
 from core.trainer import HiSD_Trainer
 import argparse
 import torchvision.utils as vutils
 import sys
 import torch
 import os
 from torchvision import transforms
 from PIL import Image
 import numpy as np
 import time
 import matplotlib.pyplot as plt 

Download the pre-trained checkpoint file (a PyTorch .pt file) from the official page using the following command.

!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1KDrNWLejpo02fcalUOrAJOl1hGoccBKl' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1KDrNWLejpo02fcalUOrAJOl1hGoccBKl" -O checkpoint_256_celeba-hq.pt && rm -rf /tmp/cookies.txt

Move the checkpoint file to the HiSD/ directory using the following commands.

 %cd /content/
 !mv checkpoint_256_celeba-hq.pt HiSD/ 

Load the checkpoint and prepare the model for inference using the following code.

 device = 'cpu'
 config = get_config('configs/celeba-hq_256.yaml')
 noise_dim = config['noise_dim']   # dimensionality of the latent noise z
 image_size = config['new_size']   # resolution expected by the generator
 checkpoint = 'checkpoint_256_celeba-hq.pt'
 trainer = HiSD_Trainer(config)
 # assumed CPU device
 # if a GPU is available, set map_location=None and device='cuda'
 state_dict = torch.load(checkpoint, map_location=torch.device('cpu'))
 trainer.models.gen.load_state_dict(state_dict['gen_test'])
 trainer.models.gen.to(device)
 # shorthand handles for the generator's sub-networks
 E = trainer.models.gen.encode     # encoder: image -> feature map
 T = trainer.models.gen.translate  # translator: feature map + style -> feature map
 G = trainer.models.gen.decode     # decoder: feature map -> image
 M = trainer.models.gen.map        # mapper: noise -> style code (latent-guided)
 F = trainer.models.gen.extract    # extractor: reference image -> style code (reference-guided)
 transform = transforms.Compose([transforms.Resize(image_size),
                                 transforms.ToTensor(),
                                 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

Define a function to perform the image-to-image translation.

 def translate(input, steps):
     # load and normalise the input image, then encode it into a feature map
     x = transform(Image.open(input).convert('RGB')).unsqueeze(0).to(device)
     c = E(x)
     c_trg = c
     for j in range(len(steps)):
         step = steps[j]
         if step['type'] == 'latent-guided':
             # latent-guided: sample a style code from Gaussian noise via the mapper M
             if step['seed'] is not None:
                 torch.manual_seed(step['seed'])
                 torch.cuda.manual_seed(step['seed'])
             z = torch.randn(1, noise_dim).to(device)
             s_trg = M(z, step['tag'], step['attribute'])
         elif step['type'] == 'reference-guided':
             # reference-guided: extract the style code from a reference image via F
             reference = transform(Image.open(step['reference']).convert('RGB')).unsqueeze(0).to(device)
             s_trg = F(reference, step['tag'])
         # apply the style for this tag to the translated feature map
         c_trg = T(c_trg, s_trg, step['tag'])
     # decode back to an image and rescale from [-1, 1] to [0, 1]
     x_trg = G(c_trg)
     output = x_trg.squeeze(0).cpu().permute(1, 2, 0).add(1).mul(1/2).clamp(0, 1).detach().numpy()
     return output

The following cells set the desired tags, attributes and styles to perform the translation. They use the built-in example images; users can substitute their own images.

First example inference:

 input = 'examples/input_0.jpg'
 # e.g.1 change tag 'Bangs' to attribute 'with' using 3x latent-guided styles (generated by random noise). 
 steps = [
     {'type': 'latent-guided', 'tag': 0, 'attribute': 0, 'seed': None}
 ]
 plt.figure(figsize=(12,4))
 for i in range(3):
     plt.subplot(1, 3, i+1)
     output = translate(input, steps)
     plt.imshow(output, aspect='auto')
 plt.show() 

Second example inference:

 input = 'examples/input_1.jpg'
 plt.figure(figsize=(12,4))
 # e.g.2 change tag 'Glasses' to attribute 'with' using reference-guided styles (extracted from another image). 
 steps = [
     {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_0.jpg'}
 ]
 output = translate(input, steps)
 plt.subplot(131)
 plt.imshow(output, aspect='auto')
 steps = [
     {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_1.jpg'}
 ]
 output = translate(input, steps)
 plt.subplot(132)
 plt.imshow(output, aspect='auto')
 steps = [
     {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_2.jpg'}
 ]
 output = translate(input, steps)
 plt.subplot(133)
 plt.imshow(output, aspect='auto')
 plt.show() 

Third example inference:

 input = 'examples/input_2.jpg'
 # e.g.3 change tags 'Glasses' and 'Bangs' to attribute 'with', and 'Hair color' to 'black', in one translation.
 steps = [
     {'type': 'reference-guided', 'tag': 0, 'reference': 'examples/reference_bangs_0.jpg'},
     {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_0.jpg'},
     {'type': 'latent-guided', 'tag': 2, 'attribute': 0, 'seed': None}
 ]
 output = translate(input, steps)
 plt.figure(figsize=(5,5))
 plt.imshow(output, aspect='auto')
 plt.show() 
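
Since translate() returns a NumPy array scaled to [0, 1] (and PIL and NumPy are already imported above), any of the outputs above can also be written to disk instead of only being displayed. A minimal sketch, with an arbitrary file name:

 # Convert the [0, 1] float array returned by translate() to 8-bit RGB and save it.
 Image.fromarray((output * 255).astype(np.uint8)).save('translated_example.jpg')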

Performance of HiSD

HiSD is trained and evaluated on the well-known CelebA-HQ dataset of 30,000 celebrity face images, labelled with tags and attributes such as hair colour, presence of glasses, bangs, beard and gender. The first 3,000 images are used as test images, and the remaining 27,000 images are used for training. Competing models are also trained on the same dataset under identical device configurations to enable a fair comparison.

Qualitative comparison of HiSD with the recent state-of-the-art methods SDIT and StarGANv2
Qualitative comparison of HiSD with the recent state-of-the-art method ELEGANT

HiSD outperforms current state-of-the-art models, including SDIT, ELEGANT and StarGANv2, on the Frechet Inception Distance (FID) metric, which is used here to measure both the realism and the disentanglement of the translations.
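
For readers who want to run an FID-style comparison on their own outputs, the widely used pytorch-fid package computes the metric between two folders of images. This is only an illustrative sketch with placeholder folder paths, not the authors' exact evaluation script.

!pip install pytorch-fid

!python -m pytorch_fid path/to/real_images path/to/generated_images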

Note: Images and illustrations other than the code outputs are taken from the original research paper and the official repository.

Industry-Leading Brands Adopt Translations.com-Adobe Integration for Multilingual Content Management at Record Pace - Business Wire - Translation

NEW YORK & SAN JUAN, Puerto Rico--(BUSINESS WIRE)--Translations.com, the technology division of TransPerfect, the world’s largest provider of language and technology solutions for global business, today announced that more than 30 leading brands have implemented GlobalLink® Connect’s Adobe integrations to manage their global enterprise content in 2020. These integrations allow businesses to leverage GlobalLink's translation workflow management from within the user interface of Adobe applications.

“Adobe works closely with technology partners like Translations.com to help our customers take full advantage of their investment in our solutions,” said Nik Shroff, Senior Director, Global Technology Partners at Adobe. “For over twelve years, Translations.com has helped brands around the globe find new and interesting ways to scale, launch, and maintain multilingual digital experiences. As a Premier partner, Translations.com’s integration with Adobe Experience Cloud will continue to give our customers the ability to reach new markets faster than ever.”

Translations.com is a Premier Partner in the Adobe Exchange Program with more than 150 shared customers and over 12 years of experience. As a Platinum sponsor of this year’s Adobe Summit, the company will showcase success stories from Novo Nordisk, Honeywell, Amplifon, and GF Machining Solutions. Attendees can register for the session, webcast, and more at the dedicated Adobe Summit landing page.

GlobalLink Connect provides an end-to-end solution that manages all facets of the translation process. Many of Adobe’s offerings, including Adobe Experience Cloud and Adobe Creative Cloud, combine with GlobalLink Connect’s workflow capabilities to create a seamless plug-and-play solution with virtually no IT overhead. Users benefit from streamlined management and control over customer experiences in multiple languages.

GlobalLink Connect’s Adobe integrations include:

  • Adobe Experience Manager
  • Adobe Magento Commerce
  • Adobe Marketo Engage
  • Adobe Creative Cloud
  • Adobe InDesign Server
  • Adobe Component Content Management System

GlobalLink Connect features include:

  • Scheduled or on-demand translations via Adobe’s UI
  • Dashboard view of translation spend and other KPIs
  • Internal or external vendor management
  • Flexible workflows featuring machine translation, human translation, or both
  • Rapid ROI via reduced IT involvement and project management overhead

Scott Rathburn, Global Localization Lead and Senior Content Editor from Haas Automation and a GlobalLink-Adobe integration user, commented, “GlobalLink is the foundation of Haas Automation’s global localization strategy. Since deploying in 2018, we have more than doubled our number of locales while reducing our time-to-market for new content and improving efficiencies company-wide. We change content monthly, if not weekly, and it’s frequently targeted by region. We simply could not do what we do without GlobalLink’s expansive capabilities and the phenomenal Translations.com support team behind it. We’re looking forward to Translations.com’s upcoming Adobe Summit session and webcast, and we’re excited to see what the future has in store.”

TransPerfect President and CEO Phil Shawe stated, “We are excited to highlight some of our most exciting joint success stories at the Adobe Summit. Adobe has been a key technology partner for us for over 12 years, and we look forward to expanding that partnership in the future. New joint customers have onboarded our GlobalLink-Adobe integrated solutions at a record pace in 2020 and, most importantly, those clients are benefiting from the ability to manage multilingual content directly from their Adobe user interface.”

About Translations.com

Translations.com is the world’s largest provider of enterprise localization services and technology solutions. From offices in over 100 cities on six continents, Translations.com offers a full range of services in 170+ languages to clients worldwide. More than 5,000 global organizations employ Translations.com’s GlobalLink® technology to simplify management of multilingual content. Translations.com is part of the TransPerfect family of companies, with global headquarters in New York and regional headquarters in London and Hong Kong. For more information, please visit www.translations.com.

NEC Develops Multilingual Speech Translation System 'DokoMinaPhone' - AiThority - Translation


Lost in translation: Rape suspect says he misunderstood Spanish Miranda warning - The Winchester Star - Translation

WINCHESTER — Rape suspect Jamie Barba Campirano speaks Spanish but says he doesn't read it well.

Barba's purported lack of reading comprehension was the basis for challenging the admissibility of possibly incriminating statements he made to police. He said he didn't understand the Spanish-language Miranda warning against self-incrimination that he read on a card given to him by police.

The June 16 interrogation occurred at the Frederick County home of Campirano's girlfriend after a 14-year-old girl told police Campirano raped her on June 14. Campirano's DNA didn't match what was on the girl's clothing, according to court documents. But police say that after Campirano waived his Miranda rights, he admitted to briefly having sex with the girl and to knowing she was underage.

The purported admission came after Campirano was recorded on the body camera of Deputy Thomas Wyatt of the Frederick County Sheriff's Office reading the card and agreeing to talk. Nonetheless, Campirano, who doesn't speak English, testified on Monday in Frederick County Circuit Court that he didn't understand that what he read was about his right to remain silent and to have an attorney represent him. He said he only spoke to police because he was nervous.

"I have to read things several times before I understand them. I have to take time to read them," Campirano testified through a translator. "I just read it once, and I didn't understand anything."

On video from Mexico City, the principal at Campirano's elementary school testified that Campirano dropped out of school after the sixth grade. Records from the school introduced by Campirano's attorneys showed he struggled to read. Campirano, who said he dropped out to go to work to support his widowed mother, said he avoided reading aloud at school because he stuttered and his classmates laughed at him.

But Kristen G. Zalenski, an assistant commonwealth's attorney, contended the 22-year-old Campirano was playing dumb. She noted Campirano, convicted in Winchester in November 2020 of leaving the scene of an accident, had been read his Miranda warning in November 2019 by a translator, so he should've understood what was on the card.

"Was the defendant lying on June 16, 2020, or is he lying today in court? It comes down to that," Zalenski told Judge William Warner Eldridge IV. "Just because you are scared and speak a different language doesn't mean you can't knowingly give up your Miranda rights."

Since the 1966 Supreme Court Miranda v. Arizona case mandating suspects be told by police before questioning that they have a right to an attorney and to remain silent, Miranda warnings have become ubiquitous on TV crime shows and movies and are widely known by Americans. But Campirano didn't come to the U.S. until 2019. He said he rarely read until he began reading the Bible and newspapers after being jailed.

Defense attorneys Gerardo M. Delgado and Jason Ransom argued that it was plausible that a slow reader who'd only been read a Miranda warning once by a translator seven months earlier might not understand the card that was handed to him. They pointed to testimony by two Spanish-speaking translators who said Campirano's reading of the warning, captured on the deputy's body camera video, was rushed and often unintelligible.

"It wasn't read properly," translator Manuel Prado testified on Monday. "There were no pauses, no inflections. It was a run-on [sentence]."

Ransom noted that it's natural for people in routine traffic stops to be nervous when questioned by police. In Campirano's case, he was facing questions about a rape.

"Is it really hard to believe you may not be focused on what you're reading?" Ransom asked Eldrige. "You come from Mexico with a sixth-grade education and you're scared of the police. Are you really soaking in what you're reading?"

Eldridge said he was having difficulty deciding whether to allow or suppress Campirano's statements to police. He noted previous challenges to Spanish Miranda warnings have focused on inaccurate translations. Spanish-language Miranda warnings are issued about 900,000 times annually, according to the American Bar Association. In 2019, the ABA sought a uniform Spanish-language warning from police to ensure accuracy.

Eldridge said he was unable to find any legal precedents about Spanish-language Miranda challenges based on the reading comprehension of the defendant. He said he wanted more time to study the case and would rule at 10 a.m. Friday. "This is an unusual circumstance and a lot is at stake for both parties," he said.

Monday, March 29, 2021

Bill Kirby: I would like to have a word with you - The Augusta Chronicle - Dictionary

“Dictionaries are like watches; the worst is better than none, and the best cannot be expected to go quite true.”

― Samuel Johnson

Welcome back to Professor Bill's School of Wayward Words.

I continue to be surprised by the words in the English language that I did not know existed. As someone who has read and edited for decades, I'm often amazed, not by the technical words or phrases, but by a simple combination of letters that reveals something I'd never seen.

I save these on my computer laptop and am now up to 112.

Here are some of the latest.

Cossetted – It means "pampered." 

Chiasm – An intersection or crossing of two tracts in the form of the letter "X". I like to think of it this way: It's the opposite of a "chasm" – that separation between two bodies, such as a canyon. A chiasm ("chi" is the Greek word for "X") is where two bodies connect.

Heuristic – In mathematics or decision making, a heuristic is basically a shortcut. You sacrifice accuracy for speed and say, "Close enough." (Which is the way my wife says I figure out income taxes.)

Porphyry – "A textural term for an igneous rock consisting of large-grained crystals such as feldspar or quartz dispersed in a fine-grained silicate." (I thought it was that stuff you put in small bowls to make a room smell good.) 

Koan – It comes from Zen Buddhism and is a paradox or riddle that illustrates shortfalls in logical reasoning. One of the examples offered is whether a tree falling in the woods creates a sound if there is no one to hear it.

ANOTHER COLLECTION: Yes, I am also still "collecting" license tags, keeping track of different states in my travels since last summer. I was down to two until last week when I spotted Hawaii on a dark gray Honda in front of me at the stoplight. Now I'm down to one – Rhode Island. Providence, I hope, will help me out.

BASEBALL BEGINS: Major League Baseball returns Thursday with the Braves opening in Philadelphia.

I like to think this means my life will return to a little bit of normal, but normal is hard to define anymore. Take our local professional team, the Augusta GreenJackets. They don't begin their season until May.

No April games, which used to be my favorites. The players were all new and the nights were not as humid.

TODAY'S JOKE: A man goes into a drug store and asks the pharmacist if he has anything to cure hiccups.

The druggist leans forward and abruptly slaps the man's face.

"What was that for?" the startled gentleman asks, rubbing his jaw.

"Well," said the pharmacist, "you don't have the hiccups anymore, do you?"

"No," said the man, "but my wife out in the car still does."

Waverly Labs Ambassador Interpreter Review: The world just got smaller - iMore - Translation

The future has arrived — that was the thought I had the first time I used the Waverly Labs Ambassador Interpreter. This gadget is like something out of Star Trek. You set the app up on your smartphone, put the earphone in your ear, and just like that, you're communicating in a new language. The Interpreter can translate 20 different prominent languages in 42 different dialects. This is a game changer, especially for someone like me who loves to travel the world.

So far I have had a chance to test this unit out in English, Spanish, and Arabic. It worked well in all three languages, although it's a little stronger in the Spanish to English translations, perhaps because these two languages have similar Latin roots. Regardless, I was able to communicate adequately in all three languages with people of several different nationalities. I noticed lots of advantages and a few disadvantages to using this device — all of these are outlined below.

Ambassador Interpreter

Bottom line: The Ambassador Interpreter by Waverly Labs will change the way you travel. It erases communication barriers in 20 different languages, and it's all contained in an easy-to-use earbud.

The Good

  • Works beautifully
  • Easy to set up and use
  • Includes a wide array of languages and dialects
  • Translates into my ear so only I can hear it
  • Includes text translation in case I didn't understand the first time

The Bad

  • Requires me to re-pair earbuds each time
  • There's no history of translation logs
  • Translation is not always 100% accurate

Ambassador Interpreter: Price and availability

The Ambassador Interpreter is currently only available as a direct purchase from the manufacturer. This is a new product that is not yet available from mainstream retailers. One advantage of this is that Waverly Labs has a good return and exchange policy, so you can trust that any product you receive from them will be backed by a solid warranty. It's currently available for $179, which is a great deal since you receive two interpreter earpieces that are meant to be used by two different people. I think this product is worth more than its modest price tag.

Ambassador Interpreter: The triple language challenge

Aside from the obvious cool factor, I appreciate the fact that the Ambassador Interpreter system is super easy to set up and use. Once the app was downloaded, I paired the earbuds to my iPhone 12 Pro Max the same way I would pair any wireless earphones. From there, I just chose which mode I wanted to use (Listen, Converse, or Lecture) and the translator started murmuring the translation in my ear in real time. It also translated all speech into a written translation on the iPhone, so I could read as well as hear what was being said. Very simple and straightforward.

At first I used the Listen mode in Spanish only. I set it up in Colombian Spanish and told my (Colombian) husband to speak to me in Spanish. Since I am fluent in Spanish, I was able to judge the quality of the translation. The translation in Spanish was almost perfect if my husband spoke clearly. If he spoke super fast or ran the words together, the Ambassador Interpreter would request that he repeat himself. I was curious how the translation would stack up against an automatic translator like the Google Translate app, so I tested them against each other. The translations were very similar, but thanks to the dialects the Ambassador Interpreter has installed, the Waverly Labs device was able to more closely translate Colombian slang, while Google stuck to a very direct, generic translation. All in all, the Ambassador Interpreter won against Google Translate because it has local dialects and colloquialisms built into the device.

My next experiment was to try it out in Converse Mode to see how it worked with an Arabic to English translation and vice versa. My mother-in-law is half Syrian (how's that for multicultural?), but she doesn't speak a word of English, so it was an interesting exchange. We both wore one Ambassador Interpreter earphone; mine was set to translate her Arabic speech to English, and hers was set to translate English to Arabic. Everything we said came out as a text translation in the Ambassador Interpreter app. We talked about the weather in Syria and our favorite foods. It was a fun experience, and very strange since we usually only communicate in Spanish.

According to my mother-in-law and some Syrian cousins who also tried out the device, the Arabic translation from English is not as accurate as the English to Spanish translation, but it allowed us all to communicate with each other just fine. I am super excited to use this device for traveling. I remember having the hardest time of it when I got lost on the Paris train years ago — what a difference this device would have made! It will also be invaluable for my husband's upcoming trip to Syria. He has no language in common with the Syrian side of his family, so this device will make it possible to communicate with his extended family in a way that would have been otherwise impossible without someone to serve as translator.

Finally, I did try out the "Lecture" mode, which allows you to speak using the device to translate for you on the iPhone or a Bluetooth speaker. This is excellent if you need to lecture or make presentations in a second language. Your business can save a load of money by using an Ambassador Interpreter instead of hiring a live interpreter. I can imagine a lot of scenarios where this feature would be incredibly useful in business environments.

Ambassador Interpreter: What's not so good

As I mentioned in the previous section, the translation from Arabic to English (and vice versa) was not as accurate as the translations between Spanish and English. This is likely due to the fact that Spanish and English are simply more similar to each other than English and Arabic, which have roots in completely different regions and languages. Another reason for this may be that there is no "Syrian" dialect in the Ambassador Interpreter, whereas we did have a specific "Colombian" dialect of Spanish. I assume that if I were speaking with someone from Lebanon using the Lebanese dialect, the translation would likely be more accurate. Either way, we were able to communicate and understand each other in Arabic, so that's the point of the Interpreter in the first place. The app and devices are continuously updated, so this aspect will likely be improved over time.

I also noticed that I have to re-pair the earbuds in the Bluetooth settings panel every time I use them. This is not a huge deal, but it's something to point out, since most of my Bluetooth earphones connect automatically after the first pairing. I haven't tried to troubleshoot this problem with Waverly Labs; perhaps it is an easy fix. Regardless, it does not affect the usage or effectiveness of the product.

Finally, I'd like to see a way to record logs of translations and conversations. If I were using the Ambassador Interpreters for work purposes, I think a translation log history would come in handy in case I forgot some part of the conversation, or if I just wanted to review it at a later time. Perhaps the manufacturer will add this feature in the future.

Ambassador Interpreter: Competition

The Ambassador Interpreter is an innovative new product, and there are few products in the marketplace right now to contend with it. The only other comparable product I can find is the WT2 Plus AI Realtime Translator Earbuds from Timekettle. The WT2 offers a similar service: translation earbuds that work with a specialized smartphone app. They have the capability to translate 40 different languages - more than the Ambassador Interpreter - but they are also $70 more expensive.

Ambassador Interpreter: Should you buy it?

You should buy this if ...

  • You travel a lot for pleasure or work.
  • You work on a regular basis with people who speak different languages.
  • You're learning a second language.
  • You have family members from different parts of the world.

You shouldn't buy this if...

  • You've never left your hometown and don't plan to.
  • You've never met anyone that speaks a different language.
  • You need a detailed history of each translation.

4.5 out of 5

World travelers, international businesses, and intercultural families can all reap huge benefits from this smart device. For business travel alone, this inexpensive device is much more affordable than a live translator. I personally plan to use it during my travels, and my husband plans to use it to speak to long-distance family members in other parts of the world. The translations are accurate and fast, with both spoken and written translations to make sure everyone stays on the same page. With Listen, Converse, and Lecture modes, there are a lot of different ways to use the Ambassador Interpreter, and a lot of different situations that call for this type of technology. Honestly, the possibilities are endless.

Waverly Labs Ambassador Interpreter

Bottom line: The world just got smaller. Make business interactions and international travel more convenient with this nifty automatic translator. With 20 different languages and 42 different dialects, there are few places in the world where the Ambassador Interpreter won't come in mighty handy.