Category Archives: Thoughts and speculation

Bias in, bias out

Google Translate has developed an understanding of the meaning behind words, so it can translate directly from one language to another using the concepts behind phrases rather than a word-by-word translation.

This means it can be taught to translate from French to German and from German to Chinese, and because it understands language at a conceptual level it can then translate French into Chinese without going via German; it matches concepts, not words.
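To make “matching concepts, not words” concrete, here’s a toy Python sketch. This is not how Google’s system actually works (the real thing learns a shared vector space with a neural network rather than using hand-written tables), and the tiny vocabulary and concept IDs are made up for illustration, but it shows how a language-neutral middle layer lets you translate between a pair of languages you never explicitly paired up.

```python
# Toy illustration of concept-level translation (not Google's actual system,
# which learns a shared vector space rather than using hand-written tables).

CONCEPTS = {
    # concept_id: {language: word} -- made-up vocabulary for illustration
    "CAT":    {"fr": "chat", "de": "Katze",  "zh": "猫"},
    "DRINKS": {"fr": "boit", "de": "trinkt", "zh": "喝"},
    "MILK":   {"fr": "lait", "de": "Milch",  "zh": "牛奶"},
}

# Build a word -> concept lookup for each language.
WORD_TO_CONCEPT = {
    lang: {words[lang]: cid for cid, words in CONCEPTS.items()}
    for lang in ("fr", "de", "zh")
}

def translate(sentence: str, src: str, tgt: str) -> str:
    """Map each word to a language-neutral concept, then render it in the target."""
    out = []
    for word in sentence.split():
        concept = WORD_TO_CONCEPT[src].get(word)
        out.append(CONCEPTS[concept][tgt] if concept else f"<{word}?>")
    return " ".join(out)

# French -> Chinese directly, even though no French-to-Chinese table was ever written:
print(translate("chat boit lait", "fr", "zh"))  # 猫 喝 牛奶
```

Swap the hand-written concept table for vectors learned from millions of sentence pairs and you get something much closer to the zero-shot translation Google describes.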

Should we be worried by this latest revelation of a neural machine translation system that has created its own internal language that nobody understands?

I’m not sure.

Imagine an algorithm to determine where to concentrate healthcare research. If its inputs are biased towards one section of society, accidentally rather than by design, wouldn’t it develop a skewed view of the world?

Wouldn’t it favour some people over others?

Yes, but we already have a healthcare system that does that, don’t we? And an algorithm could be less biased, because it would be much more effective at using large volumes of data to determine the best outcome overall.

The difference is that in a world of “bias in, bias out” and opaque algorithms nobody, not even the creators, would know why it made the choices it did.
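To make “bias in, bias out” concrete, here’s a toy Python sketch with entirely made-up numbers, not a real healthcare system. The “prioritiser” simply allocates budget in proportion to how often a condition appears in its training records, so if one group’s records are under-collected, its conditions look rarer than they really are.

```python
# A toy "research prioritiser" with made-up numbers, illustrating bias in, bias out.
# Suppose conditions A and B are equally common in the real world, but the group
# that mostly suffers from B is under-represented in the records we collected.

records = ["A"] * 900 + ["B"] * 100   # biased sample, not the real prevalence

def allocate_budget(records, total=1_000_000):
    """Allocate budget in proportion to how often each condition appears."""
    counts = {}
    for condition in records:
        counts[condition] = counts.get(condition, 0) + 1
    return {c: total * n / len(records) for c, n in counts.items()}

print(allocate_budget(records))
# {'A': 900000.0, 'B': 100000.0}
# The skewed input becomes a skewed output, and nothing in the output
# reveals that the input was biased in the first place.
```

The output looks perfectly reasonable on its own; the skew is only visible if you know how the records were gathered, which is exactly the opacity problem.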

Maybe this is a price worth paying.

As this TechCrunch article says, “Neural networks may be complex, mysterious and a little creepy, but it’s hard to argue with their effectiveness.”


photo credit: Adi Korndörfer … brilliant ideas via photopin (license)

Machine Learning Algorithms

Artificial Intelligence

Are machines that learn for themselves the stuff of nightmares or a vision of a wonderful utopian future?

The answer, of course, is neither.

We all know that technology is neutral, even if we forget that a lot of the time. But there is that niggling doubt. What if machines broke through the barrier and became sentient and intelligent?

It’s possible, but probably a long way off.

Artificial Intelligence and robots are hot topics for science fiction at the moment, and I’m one of those who believe we should use fiction to help us imagine the future so we can be better prepared for it. Good or bad.

The more of us who have a basic understanding of how the tech works, the richer the debate about how it’s used will be, so I was pleased to find some fun stuff from Google that starts to demystify machine learning.

Here’s an AI experiment that tests a neural network to see if it can guess what you’re sketching.

I’m rubbish at drawing, but it guessed 2 of my 5 doodles, and as the designers say, “The more you play with it, the more it will learn.”

Take a look – https://aiexperiments.withgoogle.com/quick-draw

Detect deceit and delete

I came across these two stories last week – there’s an algorithm that can detect deceit in your social media feed, and Twitter has been telling people they don’t exist.

This led me to ponder what it would be like to be in charge of a social media company with a conscience.

Imagine you’re uncomfortable with providing a platform from which people tell lies that are stored for future generations as the accurate record of our social history.

If your algorithms can detect deceit and detect it more effectively than human beings – that’s the claim – then would you consider it your moral duty to find the lies and delete them all? Of course you’d have to trust the algorithms, and their creators, to not deceive you.
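As a rough sketch of what “detect deceit and delete” might look like in code – the deceit scorer below is a crude, hypothetical keyword check standing in for the real algorithm, which I don’t have access to – consider:

```python
# A toy sketch of "detect deceit and delete". The scorer is a crude stand-in
# (hypothetical keyword matching), not the algorithm from the article.

SUSPECT_PHRASES = ["definitely true", "everyone knows", "100% guaranteed"]

def deceit_score(post: str) -> float:
    """Crude stand-in: fraction of suspect phrases present in the post."""
    post = post.lower()
    hits = sum(phrase in post for phrase in SUSPECT_PHRASES)
    return hits / len(SUSPECT_PHRASES)

def moderate(posts, threshold=0.3):
    """Delete anything the scorer flags -- and keep no record of why."""
    return [p for p in posts if deceit_score(p) < threshold]

feed = [
    "Everyone knows this cure is 100% guaranteed.",
    "Had a lovely walk by the river today.",
]
print(moderate(feed))  # only the second post survives
```

Everything hinges on that threshold and on the scorer itself; whoever sets them decides what counts as a lie.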

Would you delete everything that appeared to be a lie, no matter how big or small?

I wonder if Twitter is temporarily suspending accounts while it cleanses them.

Have you checked your social media history recently?

Maybe you should…


photo credit: 000109 via photopin (license)