New Delhi, Mar 31 (PTI) Slamming West Bengal Chief Minister Mamata Banerjee, the BJP on Wednesday said that democracy should be the last word in her dictionary as it hit back at her following her letter to opposition leaders against the Narendra Modi government's alleged assault on democracy. 'Democracy should be the last word in @MamataOfficial & @AITCofficial dictionary. Their cadre attack @BJP4Bengal candidates, intimidate voters, capture booths, block all hoardings and at the end leaders preach Democracy,' BJP general secretary (organisation) B L Santhosh tweeted. The sharp attack by Santhosh came following Banerjee's letters to non-BJP leaders, expressing serious concern over alleged 'assaults' by the BJP and its government on democracy and constitutional federalism of India. Ahead of the second phase of polls in the state, Banerjee's letter, which was released by the TMC on Wednesday, seeks to drum up support from opposition leaders by highlighting how non-BJP states have suffered due to the saffron party-led Centre's actions.
Wednesday, March 31, 2021
Bible translation organisations join to “make God's Word accessible to all by 2033” - Evangelical Focus - Translation
Ten of the world’s leading Bible translation organisations have recently launched the “I Want to Know” campaign, which aims to “make God's Word accessible to all people by 2033”.
This alliance of Bible translation partners is called IllumiNations and includes the American Bible Society, Biblica, Deaf Bible Society, Lutheran Bible Translators, Seed Company, SIL International, United Bible Societies, The Word for the World, Pioneer Bible Translators and Wycliffe Bible Translators USA.
According to IllumiNations, over 1 billion people lack access to God’s Word in their language, 3,800 language communities worldwide do not have a complete Bible, and more than 2,000 of those languages do not have a single verse of Scripture translated yet.
The project hopes that “95% of the world’s population will have access to a full Bible, 99.96% will have access to a New Testament and 100% will have access to at least some portion of Scripture in 12 years”.
“Imagine your life if you didn’t know the Truth. Didn’t know the unconditional love of Jesus. Didn’t know the life-altering Word of God, because it didn’t exist in your language. That’s the grim reality for over one billion people around the world”, says the campaign.
And it adds: “We are on a mission to change that, because knowing the Truth changes everything”.
According to its creators, the "I Want to Know" campaign is the largest Bible translation campaign introduced on social and digital media and it shows testimonies of 6 people who don't yet have access to the full Bible in their own language.
Participants in the initiative can sponsor “one Bible verse translated in a language awaiting God’s Word” for $35. They are also encouraged to post the Bible verse they “want the world to know” on social media using the hashtag #IWTKBible.
“The translators are in place, the strategy is in place, and with support from Christians across the U.S. and around the world, we can help every single person on earth access Scripture in the language they understand best”, pointed out Bill McKendry, campaign creative director.
Mart Green, ministry investment officer at retail company Hobby Lobby, rallied with resource partners and translation agencies to form illumiNations in 2010, with the goal of translating the Bible into every language for all people, “a 'Goliath' of biblical proportions for generations".
"But now we are on the brink of a giant slingshot; every person can have at least a portion of the Bible in their own language within the next 12 years", he added.
According to Green, “no other Scripture translation project in history has been this ambitious or this well-coordinated, and never before have translators had the ability through technology and software to supercharge translation at such a rapid pace. The strategy, the people and the technology are in place to make it happen”.
“Can you imagine not having the Bible in English, or your native language? One billion people still don’t know what God’s Word has to say to them. We can help fulfil the Great Commission and eradicate ‘Bible poverty’ in this generation”, concluded Green.
Published in: Evangelical Focus - culture - Bible translation organisations join to “make God’s Word accessible to all by 2033”
Should White Writers Translate a Black Author’s Work? - The New York Times - Translation
Do you ever read books, plays or short stories that have been translated from another language? Have you ever read a book for school that was translated, such as “The Stranger” by Albert Camus, “The Diary of a Young Girl” by Anne Frank, “Don Quixote” by Miguel de Cervantes, “Madame Bovary” by Gustave Flaubert, “Siddhartha” by Hermann Hesse or “A Doll’s House” by Henrik Ibsen?
When reading translated works, have you ever thought about the choices the translator made about language and sentence structure, and how those might affect the message of the story, play or poem? Have you ever thought about the translator’s identity? How much do you think a translator’s race, ethnicity, nationality, gender or ability has to do with the translation?
In “Amanda Gorman’s Poetry United Critics. It’s Dividing Translators,” Alex Marshall writes about a debate in Europe about who should be asked to translate work by writers of color. (If you haven’t read Ms. Gorman’s poem, you can read the transcript here.)
Hadija Haruna-Oelker, a Black journalist, has just produced the German translation of Amanda Gorman’s “The Hill We Climb,” the poem about a “skinny Black girl” that for many people was the highlight of President Biden’s inauguration.
So has Kubra Gumusay, a German writer of Turkish descent.
As has Uda Strätling, a translator, who is white.
Literary translation is usually a solitary pursuit, but the poem’s German publisher went for a team of writers to ensure the poem — just 710 words — wasn’t just true to Gorman’s voice. The three were also asked to make its political and social significance clear, and to avoid anything that might exclude people of color, people with disabilities, women, or other marginalized groups.
For nearly two weeks, the team debated word choices, occasionally emailing Ms. Gorman for clarifications. But as they worked, an argument was brewing elsewhere in Europe about who has the right to translate the poet’s work — an international conversation about identity, language and diversity in a proud but often overlooked segment of the literary world.
“This whole debate started,” Gumusay said, with a sigh.
It began in February when Meulenhoff, a publisher in the Netherlands, said it had asked Marieke Lucas Rijneveld, a writer whose debut novel won last year’s Booker International Prize, to translate Gorman’s poem into Dutch.
Rijneveld, who uses the pronouns they and them, was the “ideal candidate,” Meulenhoff said in a statement. But many social media users disagreed, asking why a white writer had been chosen when Gorman’s reading at the inauguration had been a significant cultural moment for Black people.
Three days later, Rijneveld quit.
Then, the poem’s Catalan publisher dropped Victor Obiols, a white translator, who said in a phone interview his publisher told him his profile “was not suitable for the project.”
Literary figures and newspaper columnists across Europe have been arguing for weeks about what these decisions mean, turning Ms. Gorman’s poem of hope for “a nation that isn’t broken, but simply unfinished” into the latest focus of debates about identity politics across the continent. The discussion has shone a light on the often unexamined world of literary translation and its lack of racial diversity.
Students, read the entire article, then tell us:
- What is your reaction to the debate in Europe? Do you think white writers should translate a Black author’s work? How much does a translator’s racial identity matter? Should other aspects of identity beyond race — class, political views, ability, religion, nationality — be taken into account when publishers are deciding who should translate a written work?
- How would you describe the work and responsibility of a translator? Is he or she obligated to stay true to the exact words, phrases, meanings and intentions of the original writer? Or do you think it is important for the translator to find ways for those same words and meanings to translate not only linguistically, but also culturally, to the audience he or she is writing for?
- Think about the language or languages you speak: What are some of the nuances — vocabulary, dialect and grammar use — that could be changed or lost as a result of a translation? If you speak multiple languages, what are some of the limitations or differences in language that make translation difficult? The featured article uses gendered language as one example, but what are others that you can think of?
- The American Literary Translators Association argued that the framing of this debate is false: Instead of “whether identity should be the deciding factor in who is allowed to translate,” the real problem is “the scarcity of Black translators.” Do you agree or disagree? Why?
- Some countries have asked musicians or rappers to translate Ms. Gorman’s poem. What do you think about this approach? Do you think that people who are not necessarily professional translators could, or should, be invited to translate work? What are some of the advantages and disadvantages of taking this route?
Tuesday, March 30, 2021
West Michigan ad agency announces Bible translation campaign - grbj.com - Translation
In partnership with illumiNations, an alliance of the world’s leading Bible translation organizations, a Grand Haven-based advertising agency rolled out the “I Want to Know” campaign.
HAVEN | a creative hub’s campaign will give people the opportunity to sponsor the translation of one or more Bible verses in partnership with one of the 3,800 language communities worldwide that don’t yet have a complete Bible.
“Can you imagine not having the Bible in English or your native language?” said Mart Green, ministry investment officer at Hobby Lobby and avid supporter of the illumiNations Bible translation movement. “One billion people still don’t know what God’s word has to say to them. We can help fulfill the Great Commission and eradicate Bible poverty in this generation.”
The goal of the campaign is that 95% of the world’s population will have access to a full Bible, 99.96% will have access to a New Testament and 100% will have access to at least some portion of Scripture by 2033.
“The goal of translating the Bible into every language for all people has been a Goliath of biblical proportions for generations,” Green said. “But now, we’re on the brink of a giant slingshot; every person can have at least a portion of the Bible in their own language within the next 12 years.”
To accomplish that goal, participants in the I Want to Know campaign can sponsor one translated verse of Scripture for $35.
Individuals also can post the Bible verse they “want the world to know” on social media using the hashtag #IWTKBible.
“The translators are in place, the strategy is in place and with support from Christians across the U.S. and around the world, we can help every single person on earth access scripture in the language they understand best,” said Bill McKendry, campaign creative director, founder and chief creative officer of HAVEN.
Python Guide to HiSD: Image-to-Image translation via Hierarchical Style Disentanglement - Analytics India Magazine - Translation
Image-to-image translation is a field in the computer vision domain that deals with generating a modified image from an original input image based on certain conditions. The conditions can be multi-labels, multi-styles, or both. In recent successful methods, translation of the input image is performed based on the multi-labels, and the generation of the output image from the translated feature map is performed based on the multi-styles. The labels and styles are fed to the models via text or reference images. The translation sometimes introduces unnecessary manipulations and alterations of identity attributes that are difficult to control in a semi-supervised setting.
Chinese researchers Xinyang Li, Shengchuan Zhang, Jie Hu, Liujuan Cao, Xiaopeng Hong, Xudong Mao, Feiyue Huang, Yongjian Wu and Rongrong Ji have introduced a new approach to control the image-to-image translation process via Hierarchical Style Disentanglement (HiSD).
HiSD breaks the original labels into tags and attributes, ensuring that the tags are independent of each other and that the attributes within a tag are mutually exclusive. At inference time, the model resolves the tags first and then the attributes, in sequence. Finally, styles are defined by latent codes extracted from reference images, so improper or unwanted manipulations are avoided. Arranging the tags, attributes and style requirements in a clear hierarchical structure leads to state-of-the-art disentanglement performance on several public datasets.
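Purely as an illustration, the hierarchy of independent tags, mutually exclusive attributes and per-attribute styles can be pictured as a nested Python mapping. The tag and attribute names below mirror the CelebA-HQ labels used later in this guide, but the dictionary itself is a hypothetical sketch, not HiSD's actual data structure:

```python
# Hypothetical sketch of HiSD's label hierarchy:
# independent tags -> mutually exclusive attributes (styles attach per attribute).
hierarchy = {
    "bangs":      ["with", "without"],          # tag 0
    "glasses":    ["with", "without"],          # tag 1
    "hair_color": ["black", "blond", "brown"],  # tag 2
}

# A translation request picks at most one attribute per tag it wants to change;
# tags not mentioned are left untouched, which avoids unwanted manipulations.
request = [("glasses", "with"), ("hair_color", "black")]

for tag, attribute in request:
    # Attributes are exclusive within a tag, so each step selects exactly one.
    assert attribute in hierarchy[tag]
```

Because each tag is handled in its own sequential step, changing "glasses" never touches the "bangs" or "hair_color" conditions, which is the disentanglement property the paper emphasises.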
HiSD processes all the conditions (i.e., tags, attributes and styles) independently, so that they can be controlled individually or in combination. The model extracts styles from reference images by converting them into latent codes, or generates them from Gaussian noise. It adds the style to the input image without affecting its identity or the other styles, tags and attributes.
Python implementation of HiSD
HiSD needs a Python environment and the PyTorch framework to set up and run. A GPU runtime is optional: the pre-trained HiSD model can be loaded and inference performed on a CPU. Install the dependencies using the following command.
!pip install tensorboardx
The following command downloads the source codes from the official repository to the local machine.
!git clone https://github.com/imlixinyang/HiSD.git
Output:
Change the directory to /content/HiSD/ using the following command.
%cd HiSD/
Download the publicly available CelebAMask-HQ dataset from Google Drive to the local machine to proceed further. Ensure that the train images are stored in the directory /HiSD/datasets and their corresponding labels in the directory /HiSD/labels. The following command preprocesses the dataset for training.
!python /content/HiSD/preprocessors/celeba-hq.py --img_path /HiSD/datasets/ --label_path /HiSD/labels/ --target_path datasets --start 3002 --end 30002
The following command trains the model and fits the model configuration to the machine and dataset. It creates a new directory under the current path named ‘outputs’ to store its outputs.
!python core/train.py --config configs/celeba-hq.yaml --gpus 0
Once dataset preprocessing is complete and the model checkpoint has been restored, the model can be used for similar applications. A sample implementation is carried out with the following simple Python code. First, import the necessary modules and libraries.
%cd /content/HiSD/
from core.utils import get_config
from core.trainer import HiSD_Trainer
import argparse
import torchvision.utils as vutils
import sys
import torch
import os
from torchvision import transforms
from PIL import Image
import numpy as np
import time
import matplotlib.pyplot as plt
Download the checkpoint file from the official page using the following command.
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1KDrNWLejpo02fcalUOrAJOl1hGoccBKl' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1KDrNWLejpo02fcalUOrAJOl1hGoccBKl" -O checkpoint_256_celeba-hq.pt && rm -rf /tmp/cookies.txt
Output:
Move the checkpoint file to the /HiSD directory using the following commands.
%cd /content/
!mv checkpoint_256_celeba-hq.pt HiSD/
Load the checkpoint and prepare the model for inference using the following codes.
device = 'cpu'
config = get_config('configs/celeba-hq_256.yaml')
noise_dim = config['noise_dim']
image_size = config['new_size']
checkpoint = 'checkpoint_256_celeba-hq.pt'

trainer = HiSD_Trainer(config)
# assumed CPU device; if a GPU is available, set map_location = None
state_dict = torch.load(checkpoint, map_location=torch.device('cpu'))
trainer.models.gen.load_state_dict(state_dict['gen_test'])
trainer.models.gen.to(device)

E = trainer.models.gen.encode
T = trainer.models.gen.translate
G = trainer.models.gen.decode
M = trainer.models.gen.map
F = trainer.models.gen.extract

transform = transforms.Compose([transforms.Resize(image_size),
                                transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
Define a function to perform the image-to-image translation.
def translate(input, steps):
    x = transform(Image.open(input).convert('RGB')).unsqueeze(0).to(device)
    c = E(x)
    c_trg = c
    for j in range(len(steps)):
        step = steps[j]
        if step['type'] == 'latent-guided':
            if step['seed'] is not None:
                torch.manual_seed(step['seed'])
                torch.cuda.manual_seed(step['seed'])
            z = torch.randn(1, noise_dim).to(device)
            s_trg = M(z, step['tag'], step['attribute'])
        elif step['type'] == 'reference-guided':
            reference = transform(Image.open(step['reference']).convert('RGB')).unsqueeze(0).to(device)
            s_trg = F(reference, step['tag'])
        c_trg = T(c_trg, s_trg, step['tag'])
    x_trg = G(c_trg)
    output = x_trg.squeeze(0).cpu().permute(1, 2, 0).add(1).mul(1/2).clamp(0, 1).detach().numpy()
    return output
The following commands set the desired tags, attributes and styles to perform translation. They use the repository's built-in example images; users can substitute their own.
First example inference:
input = 'examples/input_0.jpg'
# e.g. 1: change tag 'Bangs' to attribute 'with' using 3 latent-guided styles (generated from random noise).
steps = [
    {'type': 'latent-guided', 'tag': 0, 'attribute': 0, 'seed': None}
]
plt.figure(figsize=(12, 4))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    output = translate(input, steps)
    plt.imshow(output, aspect='auto')
plt.show()
Output:
Second example inference:
input = 'examples/input_1.jpg'
plt.figure(figsize=(12, 4))
# e.g. 2: change tag 'Glasses' to attribute 'with' using reference-guided styles (extracted from another image).
steps = [
    {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_0.jpg'}
]
output = translate(input, steps)
plt.subplot(131)
plt.imshow(output, aspect='auto')

steps = [
    {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_1.jpg'}
]
output = translate(input, steps)
plt.subplot(132)
plt.imshow(output, aspect='auto')

steps = [
    {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_2.jpg'}
]
output = translate(input, steps)
plt.subplot(133)
plt.imshow(output, aspect='auto')
plt.show()
Output:
Third example inference:
input = 'examples/input_2.jpg'
# e.g. 3: change tags 'Bangs' and 'Glasses' to attribute 'with', and 'Hair color' to 'black', in one translation.
steps = [
    {'type': 'reference-guided', 'tag': 0, 'reference': 'examples/reference_bangs_0.jpg'},
    {'type': 'reference-guided', 'tag': 1, 'reference': 'examples/reference_glasses_0.jpg'},
    {'type': 'latent-guided', 'tag': 2, 'attribute': 0, 'seed': None}
]
output = translate(input, steps)
plt.figure(figsize=(5, 5))
plt.imshow(output, aspect='auto')
plt.show()
Output:
Performance of HiSD
HiSD is trained and evaluated on the well-known CelebA-HQ dataset of 30,000 celebrity face images, labelled with tags and attributes such as hair colour, presence of glasses, bangs, beard and gender. The first 3,000 images are used as test images, and the remaining 27,000 images are used as train images. Competing models are trained on the same dataset under identical device configurations to enable a fair comparison.
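The split described above is plain index slicing. As a small illustrative sketch (the integer identifiers are hypothetical stand-ins for the image files):

```python
# CelebA-HQ: 30,000 images; the first 3,000 are held out for testing,
# the remaining 27,000 are used for training.
image_ids = list(range(30000))  # hypothetical image identifiers
test_ids, train_ids = image_ids[:3000], image_ids[3000:]

assert len(test_ids) == 3000
assert len(train_ids) == 27000
```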
HiSD outperforms current state-of-the-art models, including SDIT, ELEGANT and StarGANv2, both on FID (Fréchet Inception Distance), which measures realism, and on measures of disentanglement.
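For reference, FID compares the Gaussian statistics (mean and covariance) of feature vectors from real and generated images. The following is a minimal NumPy-only sketch of the metric, assuming the feature vectors have already been extracted (e.g. by an Inception network); it is not the authors' evaluation code, and production implementations typically use scipy.linalg.sqrtm for the matrix square root:

```python
import numpy as np

def fid(feats_real, feats_fake):
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2)).

    NumPy-only sketch; the matrix square root is computed via an
    eigendecomposition, which assumes C_r @ C_f is diagonalizable.
    """
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    vals, vecs = np.linalg.eig(c_r @ c_f)
    covmean = ((vecs * np.sqrt(vals.astype(complex))) @ np.linalg.inv(vecs)).real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(c_r + c_f - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
fake = real + 1.0  # a shifted distribution scores worse than an identical one
assert fid(real, real) < 1e-6
assert fid(real, fake) > 1.0
```

Lower FID means the generated images' feature statistics are closer to the real data's, which is why it serves as a realism score in the comparison above.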
Note: Images and illustrations other than the code outputs are taken from the original research paper and the official repository.
Industry-Leading Brands Adopt Translations.com-Adobe Integration for Multilingual Content Management at Record Pace - Business Wire - Translation
NEW YORK & SAN JUAN, Puerto Rico--(BUSINESS WIRE)--Translations.com, the technology division of TransPerfect, the world’s largest provider of language and technology solutions for global business, today announced that more than 30 leading brands have implemented GlobalLink® Connect’s Adobe integrations to manage their global enterprise content in 2020. These integrations allow businesses to leverage GlobalLink's translation workflow management from within the user interface of Adobe applications.
“Adobe works closely with technology partners like Translations.com to help our customers take full advantage of their investment in our solutions,” said Nik Shroff, Senior Director, Global Technology Partners at Adobe. “For over twelve years, Translations.com has helped brands around the globe find new and interesting ways to scale, launch, and maintain multilingual digital experiences. As a Premier partner, Translations.com’s integration with Adobe Experience Cloud will continue to give our customers the ability to reach new markets faster than ever.”
Translations.com is a Premier Partner in the Adobe Exchange Program with more than 150 shared customers and over 12 years of experience. As a Platinum sponsor of this year’s Adobe Summit, the company will showcase success stories from Novo Nordisk, Honeywell, Amplifon, and GF Machining Solutions. Attendees can register for the session, webcast, and more at the dedicated Adobe Summit landing page.
GlobalLink Connect provides an end-to-end solution that manages all facets of the translation process. Many of Adobe’s offerings, including Adobe Experience Cloud and Adobe Creative Cloud, combine with GlobalLink Connect’s workflow capabilities to create a seamless plug-and-play solution with virtually no IT overhead. Users benefit from streamlined management and control over customer experiences in multiple languages.
GlobalLink Connect’s Adobe integrations include:
- Adobe Experience Manager
- Adobe Magento Commerce
- Adobe Marketo Engage
- Adobe Creative Cloud
- Adobe InDesign Server
- Adobe Component Content Management System
GlobalLink Connect features include:
- Scheduled or on-demand translations via Adobe’s UI
- Dashboard view of translation spend and other KPIs
- Internal or external vendor management
- Flexible workflows featuring machine translation, human translation, or both
- Rapid ROI via reduced IT involvement and project management overhead
Scott Rathburn, Global Localization Lead and Senior Content Editor from Haas Automation and a GlobalLink-Adobe integration user, commented, “GlobalLink is the foundation of Haas Automation’s global localization strategy. Since deploying in 2018, we have more than doubled our number of locales while reducing our time-to-market for new content and improving efficiencies company-wide. We change content monthly, if not weekly, and it’s frequently targeted by region. We simply could not do what we do without GlobalLink’s expansive capabilities and the phenomenal Translations.com support team behind it. We’re looking forward to Translations.com’s upcoming Adobe Summit session and webcast, and we’re excited to see what the future has in store.”
TransPerfect President and CEO Phil Shawe stated, “We are excited to highlight some of our most exciting joint success stories at the Adobe Summit. Adobe has been a key technology partner for us for over 12 years, and we look forward to expanding that partnership in the future. New joint customers have onboarded our GlobalLink-Adobe integrated solutions at a record pace in 2020 and, most importantly, those clients are benefiting from the ability to manage multilingual content directly from their Adobe user interface.”
About Translations.com
Translations.com is the world’s largest provider of enterprise localization services and technology solutions. From offices in over 100 cities on six continents, Translations.com offers a full range of services in 170+ languages to clients worldwide. More than 5,000 global organizations employ Translations.com’s GlobalLink® technology to simplify management of multilingual content. Translations.com is part of the TransPerfect family of companies, with global headquarters in New York and regional headquarters in London and Hong Kong. For more information, please visit www.translations.com.