Robot figurines. Shutterstock

The problem with machine translation: beware the wisdom of the crowd

According to collective intelligence evangelist and journalist James Surowiecki, groups are much better at making predictions than the individuals who belong to those groups, be they novices or leading experts.

To illustrate this theory, Surowiecki shares a story in his 2004 book, The Wisdom of Crowds, about Sir Francis Galton, a British statistician who made an astonishing discovery while attending a country fair at the turn of the 20th century.

During the fair, there was a contest in which participants were asked to guess the weight of an ox. There were 787 entries, which Galton analysed upon returning home.

And the winner is… everyone. acceptphoto/Shutterstock

He was surprised to find that the median of all the entries was not only more accurate than the individual estimates of the butchers and farmers, who were supposed to have a keen eye for such judgments, but also just a single pound off the animal’s exact weight.

Galton would go on to publish his findings in the journal Nature, explaining the idea of vox populi: the best decisions are often those made by large groups.
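Galton’s result is easy to reproduce in simulation. The sketch below invents 787 noisy but unbiased guesses around the ox’s reported dressed weight of 1,198 lb; the spread of the guesses is made up, and only the entry count and the weight come from Galton’s account:

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # pounds; the dressed weight reported in Galton's account

# Simulate 787 independent guesses, each noisy but unbiased
# (the 75 lb standard deviation is an arbitrary assumption).
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(787)]

crowd_estimate = statistics.median(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# Count how many individuals beat the crowd's collective answer.
better_individuals = sum(1 for g in guesses if abs(g - TRUE_WEIGHT) < crowd_error)

print(f"crowd error: {crowd_error:.1f} lb")
print(f"individuals closer than the crowd: {better_individuals} of {len(guesses)}")
```

Run it and the median lands within a few pounds of the true weight, while only a small minority of individual guessers come closer than the crowd as a whole.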

Strength in numbers

Let’s compare Francis Galton’s anecdote to university courses for professional translators, in which participants have the opportunity to share their insights and clever finds, which they dissect, discuss, and critique as a group.

They arrange the best solutions into a final version, an ensemble of each individual contributor’s most inspired ideas. This translation, a team effort, will invariably be of higher quality than the participants’ individual work, no matter how talented they might be.

By extension, we might ask ourselves: might machine translation, whose statistical model more or less mimics the collective intelligence formula, replace real-life human translators? In the era of artificial intelligence, might we leverage our strength in numbers to translate, as if the Internet were a massive classroom, an enormous group project, our very own dream team with millions of members, a place where every translated text could serve as inspiration?

The idea seems brilliant on paper, but I must start by disappointing automation evangelists.

The Internet is full of specialists, but they are only a drop in an ocean of generalists who also have something to say about how a given text should be translated. AI does its best to rank the sources it identifies as reliable (say, major organisations or reputable companies) at the top. But instead of asking for the truth, it asks for the opinion of the entire planet: anyone who has written and published anything online.

To continue with the country fair analogy, this would be like asking everyone on Earth for their opinion, for better or for worse. Worse still, it would be as if everyone were guessing without even identifying the creature in front of them, since computers cannot assign meaning to the solutions they find. They would have a statistical idea of what animal it is, based on the features the machine detects, but not an exact match.

So, in addition to guesses about cattle breeds, you could potentially also get guesses about every animal on Earth, from fleas to blue whales, with all of the inconsistencies that would cause.

It pays to compare like with like. Shutterstock

Finally, and most importantly, collaborative human translations are always subject to a certain amount of shepherding, whether by the professor or presenter, who guides the group and makes the final call. In other words, a higher power sorts through the solutions from the critical mass of translators and provides the guardrails that keep the process on track. When using machine translation without human intervention, these guardrails aren’t there.

Mr Shithole goes to jumpsuit

There are, of course, a few safeguards that keep machine translation in check. The words themselves are usually a good indicator of the likely meaning of a sentence. Next, there’s the context, which neural technologies now account for, narrowing the range of possible words to certain large families.

In our cattle example, the most basic engines would corral the search to large barnyard animals, and the most sophisticated ones to bovine breeds alone. Nevertheless, given the difference between a small Angus calf and a big Charolais bull, the margin of error could still be high.

It’s no wonder, then, that otherwise fluent-sounding sentences might omit meaningful information or be peppered with offensive errors, words that crop up out of nowhere, or gender bias.

Sometimes, the meaning might be completely flipped: since translation engines are unable to “understand” what sentences mean, they opt for the statistically likeliest solution, which could be the opposite of what the original says.

In one study, the headline, “UK car industry in brace position ahead of Brexit deadline,” was translated as “L’industrie automobile britannique en position de force avant l’échéance du Brexit.” The original English sentence means the UK car industry is fearing the worst (and placing itself in a defensive position, like passengers on a plane before a crash). The French translation says the opposite: that the UK car industry is in a position of strength (en position de force).

In other words, proceed with caution, because no matter how fluent the suggested translation appears, these types of errors (incorrect terminology, omissions, mistranslations) abound in machine translation output.

My colleague Ben Karl has shared a few examples on his website, including one where Mexico’s official tourism website (automatically) translated the name of the upscale beachside resort town of Tulum as “jumpsuit.”

Another incredible gem: the name of the president of the People’s Republic of China being elegantly translated from Burmese to English as Mr. Shithole.

Normalisation and levelling out

Another issue with machine translation which people may be less aware of is a process known as normalisation. If new translations are only ever made using existing ones, over time, the process can stifle inventiveness, creativity, and originality, as several scientific studies have demonstrated.

Scholars also talk about “algorithmic bias”, whereby the more often a term is used to translate a certain word, the more likely machines are to suggest it. The result is that less frequent (and therefore more creative) translations are blotted out.
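This bias can be sketched in a few lines. The corpus counts below are invented, as is the word pair; the point is only that an engine that always returns the most frequent rendering will never surface the rarer ones, and that its own output then reinforces the dominant choice:

```python
from collections import Counter

# Hypothetical counts of how a corpus renders the French "étonnant" in English.
observed_translations = Counter({
    "surprising": 950,   # dominant, safe rendering
    "astonishing": 40,
    "startling": 8,
    "wondrous": 2,       # rare, more creative choice
})

def pick_translation(counts: Counter) -> str:
    """A purely statistical engine: always return the most frequent rendering."""
    return counts.most_common(1)[0][0]

choice = pick_translation(observed_translations)
print(choice)  # the rare options never surface, however apt they might be

# Each machine output can feed back into future training data,
# strengthening the dominant rendering over time.
observed_translations[choice] += 1
```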

Automatic translation puts the richness of literary texts at risk. seasoning_17/Shutterstock

Machines don’t try to make texts sound pretty or play with the poetry of the words – simply conveying the meaning will suffice. This levelling out, a sort of homogenisation, be it cultural, stylistic or ideological, can be a particular problem for literary texts, which by their very nature deviate from the norm and develop a distinct linguistic flavour.

An excellent article on levelling out by translator Françoise Wuilmart, written more than a decade before the emergence of neural machine translation, sounds particularly prescient today:

“Levelling out hits at the very core of what makes literary translation so hard. To level out or ‘normalize’ a text is to dull or dampen it, flatten its natural relief, lop off its pointy bits, fill in its grooves, and iron out all the wrinkles that make it a literary text in the first place.”

This is precisely what machine translation does, whether intentionally or not. The technology creates a vicious circle that, over time, leads to language impoverishment: the machine produces increasingly standardised texts, which are then used as the input to train other engines, which further level out the texts, and so on.

Studies have shown that machine-translated texts are less lexically rich. Exposing ourselves to increasingly homogenous language means hobbling our ability to express ourselves, and therefore our thoughts.
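One common proxy for the lexical richness these studies measure is the type-token ratio: distinct words divided by total words. A minimal sketch with invented sentences (the examples are mine, not from any study):

```python
def type_token_ratio(text: str) -> float:
    """Distinct words (types) divided by total words (tokens): a crude richness measure."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

# Invented examples: a varied human phrasing vs. a flatter, repetitive rendering.
human = "the herd ambled shuffled and lumbered across the sunlit pasture"
machine = "the herd walked and walked across the field in the sun"

print(f"human:   {type_token_ratio(human):.2f}")
print(f"machine: {type_token_ratio(machine):.2f}")
```

The repetitive rendering scores lower because it reuses the same few words; real studies use larger corpora and more robust measures, but the intuition is the same.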

Human expertise is indispensable

Everyone in the translation industry today recognises that it is undergoing a technological shift. Machine translation is clearly being used more and more, and its raw output is becoming increasingly usable.

However, too many users forget that automatically translated content has the potential to be rife with all kinds of errors, and that mistakes can be lurking everywhere among seemingly fluent and coherent sentences.

Expert translation professionals are uniquely equipped to assess the quality of this raw output. Only real-life humans can decide whether to use machine translation or not, like photographers picking the best camera for the conditions or accountants choosing the data entry method best suited to how they work.

Translation, like all professions, can’t escape a certain amount of automation. We could in fact be excited about this change, which can help professionals let their expertise shine, avoid repetitive tasks, and focus on where they can add the most value.

But caution is more important than ever, and indiscriminate use of machine translation should be avoided.

Real professionals will choose the best way to work with you depending on your priorities and the famous time, budget and quality trio. As your savvy linguistic and cultural consultants, they will be the key to ensuring flawless multilingual communication.

As the butcher who actually won the contest at the country fair in Plymouth in 1906 would undoubtedly have said, human expertise is the only way you can be sure to hit the bullseye every single time.

This article was originally published in French
