
Will the machine learning community protest?

Following on from my recent blogs about machine learning, here’s a bit of good news.

Well, probably good news.

Scientists and researchers at Google and Toyota are trying to do something about bias in machine learning by devising a test to detect it.

The problem, of course, is that these algorithms are deliberately designed to develop themselves, and in doing so they become complex and opaque to anyone trying to understand them. This test will spot bias by looking at the data going in and the decisions coming out, rather than by trying to figure out how the black box of the algorithm actually works.
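The paper behind the test (linked below) frames one such check as “equality of opportunity”: people who genuinely qualify for a positive decision should be approved at the same rate whichever group they belong to. Here’s a minimal Python sketch of what an audit in that spirit could look like, working only from inputs and decisions rather than the model itself; the function, data and numbers are my own illustration, not code from the Google and Toyota researchers.

```python
# Minimal sketch of a black-box bias audit: we only see the inputs
# (including a group label) and the decisions, never the model itself.
# All names and numbers here are illustrative, not from the actual test.
from collections import defaultdict

def true_positive_rates(records):
    """records: iterable of (group, qualified, approved) tuples,
    where qualified and approved are 0 or 1."""
    qualified_count = defaultdict(int)  # genuinely qualified, per group
    approved_count = defaultdict(int)   # qualified AND approved, per group
    for group, qualified, approved in records:
        if qualified == 1:
            qualified_count[group] += 1
            approved_count[group] += approved
    return {g: approved_count[g] / qualified_count[g]
            for g in qualified_count}

# Toy data: qualified applicants in group A are approved far more
# often than equally qualified applicants in group B.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
print(true_positive_rates(data))  # {'A': 0.75, 'B': 0.25}
```

A large gap between the groups, like the one above, is exactly the kind of signal such a test is designed to surface without ever opening the black box.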

This has to be applauded, so long as the people analysing and testing the decisions aren’t biased themselves; there’s an obvious danger that the very people unconsciously introducing bias into the algorithm also introduce the same bias into the test: a futuristic version of groupthink.

In a recent article in the Guardian newspaper, Alan Winfield, professor of robot ethics at the University of the West of England, said: “Imagine there’s a court case for one of these decisions. A court would have to hear from an expert witness explaining why the program made the decision it did.”

Alan, who was one of the scientists I collaborated with on Science and Science Fiction: Versions of the Future, acknowledges in the article that an absolute requirement for transparency is likely to prompt “howls of protest” from the deep learning community. “It’s too bad,” he said.

I’m not a machine learning expert, so a lot of the paper that sets out this test is beyond my understanding, but I couldn’t see how the bias that already exists in our society wouldn’t be incorporated into the test.

Take a look for yourself at the Equality of Opportunity in Supervised Learning paper.


photo credit: ING Group The Next Rembrandt via photopin (license)

Bias in, bias out

Google Translate has developed an understanding of the meaning behind words, so that it can translate directly from one language to another using the concepts behind phrases rather than a word-by-word translation.

This means it can be taught to translate from French to German and from German to Chinese, and because it understands language at a conceptual level it can then translate French into Chinese without going via German; it matches concepts, not words.
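As a crude analogy (entirely my own sketch; the real system learns dense vectors from data rather than using a lookup table), imagine every language’s vocabulary mapped into one shared concept space. Translation then routes through concepts, so a French-to-Chinese request never touches German:

```python
# Toy illustration of matching concepts rather than words. Real neural
# translation learns dense vectors; this lookup-table version is only
# my own sketch of the idea.
concepts = {
    "fr": {"chat": "CAT", "dort": "SLEEPS"},
    "de": {"Katze": "CAT", "schläft": "SLEEPS"},
    "zh": {"猫": "CAT", "睡觉": "SLEEPS"},
}

def translate(words, src, dst):
    # Encode source words into shared concepts, then decode into the
    # target language; no third language is ever involved.
    encode = concepts[src]
    decode = {concept: word for word, concept in concepts[dst].items()}
    return [decode[encode[w]] for w in words]

# French to Chinese, even though no French-Chinese pairs were written:
print(translate(["chat", "dort"], "fr", "zh"))  # ['猫', '睡觉']
```

The neural network learns its shared space from data rather than having it written down, which is exactly why nobody can point at its internal “language” and say what it means.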

Should we be worried by this latest revelation: a neural machine translation system that has created its own internal language that nobody understands?

I’m not sure.

Imagine an algorithm that determines where to concentrate healthcare research. If its inputs are biased towards one section of society, accidentally rather than by design, wouldn’t it develop a skewed view of the world?

Wouldn’t it favour some people over others?

Yes, but we already have a healthcare system that does that, don’t we? And this could be less biased, because it would be much more effective at using large volumes of data to determine the best outcome overall.
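To make the worry concrete, here’s a deliberately over-simplified, hypothetical sketch: a funding rule that follows the data completely faithfully, fed records in which one group happens to be sampled four times as heavily as the other.

```python
# Hypothetical sketch of "bias in, bias out": funding follows the data
# faithfully, but the data over-represents one section of society.
from collections import Counter

# Group A appears 4x as often as group B, accidentally rather than by
# design (say group A visits the clinics that actually report data).
records = ["condition_common_in_A"] * 400 + ["condition_common_in_B"] * 100

counts = Counter(records)
total = sum(counts.values())
budget = 1_000_000

for condition, n in counts.most_common():
    print(f"{condition}: £{budget * n / total:,.0f}")
# condition_common_in_A: £800,000
# condition_common_in_B: £200,000
```

The rule itself contains no prejudice at all; the skew is inherited entirely from the inputs.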

The difference is that in a world of “bias in, bias out” and opaque algorithms nobody, not even the creators, would know why it made the choices it did.

Maybe this is a price worth paying.

As this TechCrunch article says, “Neural networks may be complex, mysterious and a little creepy, but it’s hard to argue with their effectiveness.”


photo credit: Adi Korndörfer … brilliant ideas via photopin (license)

Machine Learning Algorithms

Artificial Intelligence

Are machines that learn for themselves the stuff of nightmares or a vision of a wonderful utopian future?

The answer, of course, is neither.

We all know that technology is neutral, even though we forget it a lot of the time. But there is that niggling doubt: what if machines broke through the barrier and became sentient and intelligent?

It’s possible, but probably a long way off.

Artificial Intelligence and robots are hot topics for Science Fiction at the moment, and I’m one of those who believe we should use fiction to help us imagine the future so we can be better prepared for it. Good or bad.

The more of us that have a basic understanding of how the tech works, the richer the debate about how it’s used will be, so I was pleased to find some fun stuff from Google that starts to demystify machine learning.

Here’s an AI experiment that tests a neural network to see if it can guess what you’re sketching.

I’m rubbish at drawing, but it guessed 2 out of my 5 doodles, and as the designers say, “The more you play with it, the more it will learn.”
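In case it helps to demystify that last claim, here’s a toy, hypothetical version of a guesser that learns as you play: a one-nearest-neighbour classifier over crude doodle features, where every confirmed drawing becomes a new training example. Quick, Draw! itself uses a neural network rather than anything this simple, but the principle of improving as examples accumulate is the same.

```python
# Toy, hypothetical guesser that "learns as you play": a 1-nearest-
# neighbour classifier over crude doodle features. Quick, Draw! itself
# uses a neural network; this only illustrates the learning loop.
import math

examples = []  # (feature_vector, label) pairs seen so far

def guess(features):
    """Return the label of the closest known doodle, or None."""
    if not examples:
        return None
    closest = min(examples, key=lambda ex: math.dist(ex[0], features))
    return closest[1]

def learn(features, label):
    """Every confirmed doodle becomes a new training example."""
    examples.append((features, label))

# Features could be anything crude, e.g. (stroke count, width/height).
learn((1, 3.0), "line")
learn((8, 1.0), "cat")
print(guess((7, 1.1)))  # 'cat'
```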

Take a look – https://aiexperiments.withgoogle.com/quick-draw