Machine Learning & Linguistic Developments to Keep an Eye on in 2018

Localization processes have definitely come a long way in the last few years, thanks mainly to the availability of technological tools that are specifically designed to support them. Most of these advancements have occurred within the areas of ML (Machine Learning) and AI (Artificial Intelligence), resulting in quicker turnaround times, improved translation quality and reduced costs.

This has freed up budgets, allowing organizations to explore other opportunities.

The past year in particular has been an eventful one for developments in ML and AI. These developments have had a tremendous impact on various industry verticals, resulting in significant revenue gains.

The localization industry is on that list too, but as a relatively new entry, it will be exposed to several more AI and ML advancements in the coming months. Let’s take a look at what these advancements are.

Neural Machine Translation

Now, NMT isn’t exactly “new”. It has been around for quite some time. However, it still offers a “first-mover advantage” for businesses in terms of localization.

There is a growing need for more efficient ways to deliver content in multiple languages. NMT is meeting this demand by shifting from niche applications meant for large enterprises to the mainstream environment.

NMT leverages voluminous training data and deep learning to construct an artificial neural network. By doing so, it can decipher patterns and look for contextual clues to translate content accurately and quickly, with almost no human intervention.

NMT can go through extremely large datasets to decipher even the most complex patterns to achieve accurate results.
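
To make the idea concrete, here is a minimal sketch of how a pretrained NMT model can be queried from Python. It assumes the open-source Hugging Face transformers library (with PyTorch) and a public Helsinki-NLP English-to-German model; it is an illustration of the concept, not a recommendation of any particular engine.

    # Minimal NMT example: translate an English sentence to German with a pretrained model.
    # Assumes: pip install transformers torch sentencepiece
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-de"   # public model trained on large parallel corpora
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    source = ["Localization processes have come a long way in the last few years."]
    batch = tokenizer(source, return_tensors="pt", padding=True)
    translated = model.generate(**batch)

    print(tokenizer.batch_decode(translated, skip_special_tokens=True))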

This doesn’t mean we won’t be needing human translators. But post-editing processes will become significantly simpler thanks to NMT, allowing post-editors to focus on often-overlooked areas such as brand standards, output quality, and certain creative aspects.

Linguistic Workflow Optimization

Machine Learning can now optimize content management systems by matching projects to the linguists best suited for the job. This is done by leveraging linguistic big data: ML can identify linguists who have stronger expertise in specific kinds of content and assign the corresponding projects to them, resulting in higher-quality translations.
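
As a rough illustration of the matching idea, the sketch below ranks linguists for a job by weighting domain expertise against historical quality scores. The data structures, field names, and weights are assumptions made for the example, not a description of any real system.

    # Hypothetical sketch: rank linguists for a project by domain expertise and past quality.
    from dataclasses import dataclass

    @dataclass
    class Linguist:
        name: str
        domain_expertise: dict   # e.g. {"legal": 0.9, "marketing": 0.4}
        avg_quality: float       # historical review score, 0.0 to 1.0

    def rank_linguists(linguists, job_domain, expertise_weight=0.7, quality_weight=0.3):
        """Sort linguists by a weighted match score for the job's content domain."""
        def score(linguist):
            expertise = linguist.domain_expertise.get(job_domain, 0.0)
            return expertise_weight * expertise + quality_weight * linguist.avg_quality
        return sorted(linguists, key=score, reverse=True)

    pool = [
        Linguist("Linguist A", {"legal": 0.9, "marketing": 0.3}, 0.85),
        Linguist("Linguist B", {"legal": 0.4, "marketing": 0.9}, 0.92),
    ]
    print([l.name for l in rank_linguists(pool, "legal")])   # ['Linguist A', 'Linguist B']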

These ML algorithms can be applied across the quality cycle. Beginning with quality analysis, ML can identify not only the right linguist but also the right linguistic resources, such as glossaries and translation style guides, and then immediately notify translators about possible translatability issues.

This helps ensure that everything is right from the start. ML can also leverage quality control components to eliminate readability issues, align the linguistic register of the source and target content, and resolve translation errors.
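
A hypothetical pre-translation check along these lines might look like the following: the source content is scanned against the project glossary and a simple readability rule so the translator is warned before work begins. Both the rules and the data structures are illustrative assumptions.

    # Hypothetical pre-translation check: warn the translator about glossary terms
    # and overly long sentences before the job is handed off.
    def pretranslation_warnings(source_text, glossary, max_sentence_words=30):
        warnings = []
        # Point out glossary terms so the approved translation is used from the start.
        for term, approved in glossary.items():
            if term.lower() in source_text.lower():
                warnings.append(f"Use approved translation for '{term}': '{approved}'")
        # Long sentences are a common cause of readability problems after translation.
        for sentence in source_text.split("."):
            if len(sentence.split()) > max_sentence_words:
                warnings.append(f"Long sentence may hurt readability: '{sentence.strip()[:40]}...'")
        return warnings

    print(pretranslation_warnings(
        "Open the Dashboard to review your quarterly report.",
        {"Dashboard": "Tableau de bord"},
    ))   # -> ["Use approved translation for 'Dashboard': 'Tableau de bord'"]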

Automated QA and Interpretation

As the amount of content continues to grow, it is becoming increasingly difficult to maintain language quality. The volume of new content is simply too large to manage manually, but, at the same time, we need to make the most of it.

This challenge can be overcome through the automation of predictable language quality checks. Automated Language QA is a collaborative and highly effective tool aimed at quality control. It maximizes scalability, productivity, and quality at reduced costs.

Automated QA engines leverage pattern recognition and other capabilities to look for issues, such as inconsistencies in terminology or missing/broken links. By doing so, these engines assist linguists in identifying and fixing problems as soon as possible.
This technology is highly capable and, for these predictable checks, delivers results significantly superior to those of human review.
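
The sketch below shows two such predictable checks on a single source/target segment pair: terminology consistency against an approved glossary, and URLs that went missing in the translation. The segment and glossary formats are assumptions made for the example.

    # Illustrative sketch of two predictable QA checks an automated engine might run.
    import re

    URL_PATTERN = re.compile(r"https?://\S+")

    def check_segment(source, target, glossary):
        """Return a list of human-readable issues found in one source/target segment pair."""
        issues = []
        # Terminology: every glossary term present in the source
        # must appear as its approved translation in the target.
        for src_term, tgt_term in glossary.items():
            if src_term.lower() in source.lower() and tgt_term.lower() not in target.lower():
                issues.append(f"Term '{src_term}' should be translated as '{tgt_term}'")
        # Links: any URL in the source should survive into the target unchanged.
        for url in URL_PATTERN.findall(source):
            if url not in target:
                issues.append(f"Missing or broken link: {url}")
        return issues

    print(check_segment(
        "Visit https://example.com to open the Dashboard.",
        "Besuchen Sie https://example.com, um das Armaturenbrett zu öffnen.",
        {"Dashboard": "Dashboard"},
    ))   # -> ["Term 'Dashboard' should be translated as 'Dashboard'"]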

Then there is automated interpretation, which has brought us closer to combining text-to-speech technology with ML.
