Bias in, bias out

Google Translate has developed an understanding of the meaning behind words, so it can translate directly from one language to another using the concepts behind phrases rather than a word-by-word translation.

This means it can be taught to translate from French to German and from German to Chinese, and then, because it understands language at a conceptual level, translate French into Chinese without going via German; it matches concepts, not words.
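The idea can be caricatured with a toy sketch. Everything here is hypothetical: the real system learns dense vector representations, not lookup tables, but the shape of the trick is the same, with source and target languages meeting in a shared, language-neutral middle layer.

```python
# Toy sketch of a shared "concept space" for zero-shot translation.
# All phrases and concept IDs are invented for illustration.

# Encoders map a (language, phrase) pair to a language-neutral concept.
ENCODE = {
    ("fr", "bonjour"): "GREETING",
    ("de", "hallo"): "GREETING",
}

# Decoders map a concept back to a phrase in the target language.
DECODE = {
    ("zh", "GREETING"): "你好",
    ("de", "GREETING"): "hallo",
}

def translate(text, src, tgt):
    """Translate via the shared concept space, never via a pivot language."""
    concept = ENCODE[(src, text)]
    return DECODE[(tgt, concept)]

# French -> Chinese works even though no French-Chinese pair was ever
# written down: the two languages only meet at the concept.
print(translate("bonjour", "fr", "zh"))  # 你好
```

The point of the sketch is that no French-to-Chinese rule exists anywhere in the tables; the pairing emerges from both languages sharing the middle representation.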

Should we be worried by this latest revelation: a neural machine-translation system that has created its own internal language that nobody understands?

I’m not sure.

Imagine an algorithm to determine where to concentrate health-care research. If its inputs are biased towards one section of society, accidentally rather than by design, wouldn’t it develop a skewed view of the world?

Wouldn’t it favour some people over others?

Yes, but we already have a health-care system that does that, don’t we? And this version could be less biased, because it would be far more effective at using large volumes of data to determine the best outcome overall.
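The skew described above can be made concrete with a hypothetical few lines. The allocator and the numbers are invented for illustration: it funds research in proportion to how often a condition appears in its input records, so if one group is under-sampled, its condition is under-funded even though the code itself contains no prejudice.

```python
# Hypothetical "bias in, bias out" sketch: a neutral-looking allocator
# that inherits whatever sampling bias its input data carries.
from collections import Counter

def allocate(case_records, budget):
    """Split a budget across conditions by observed frequency."""
    counts = Counter(case_records)
    total = sum(counts.values())
    return {cond: budget * n / total for cond, n in counts.items()}

# Condition B is under-represented in the records (a sampling artefact),
# not necessarily rarer in reality.
records = ["condition_a"] * 80 + ["condition_b"] * 20
print(allocate(records, 1_000_000))
# condition_a receives four times the funding of condition_b
```

Nothing in `allocate` mentions any group, which is exactly the problem: the bias lives in the data, and the algorithm faithfully amplifies it.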

The difference is that in a world of “bias in, bias out” and opaque algorithms nobody, not even the creators, would know why it made the choices it did.

Maybe this is a price worth paying.

As this TechCrunch article says, “Neural networks may be complex, mysterious and a little creepy, but it’s hard to argue with their effectiveness.”


photo credit: Adi Korndörfer … brilliant ideas via photopin (license)
