Tag Archives: Artificial Intelligence

The Human Black Box

“What do machine learning, deep learning and artificial intelligence have in common?”

“We believe them more than we believe our fellow humans.”

Is that true? 

When a doctor makes a diagnosis, do we simply take it for granted that they’ve got it right? Probably not. At the very least we’ll search all of our available sources of knowledge. That might mean asking friends, or friends of friends, with similar experience, or using Google to show us what it believes are the most relevant articles, which of course aren’t necessarily the wisest.

There’s a very high probability that we’ll gather information from a variety of sources and decide what to believe and what to discard. That is, until we use the magic of machine learning, where it all happens inside the algorithmic ‘black box’ and we simply have to believe.

This article in the New York Times suggests that humans are black boxes too; we don’t really understand how our own decisions are made. This seems like a reasonable argument, but maybe what it tells us is that we shouldn’t trust algorithms any more than we trust our fellow humans – ultimately we should decide for ourselves who and what to believe.

Or does that simply lead to not trusting the experts?

A conundrum for sure, but not a new one.


photo credit: jaci XIII Psyche via photopin (license)

Not even good for capitalism?

Shock horror!

Amazon has patented a way of tracking hand movements to monitor its workers’ performance. Nothing Amazon does should shock us; it’s a corporation fighting for dominance in a capitalist world.

Maybe they’re planning on tracking movement, comparing it against efficiency algorithms and punishing the transgressors. Wouldn’t that be shooting themselves in the foot, though? It presumes that the optimum movement has already been found, and it precludes those clever, inventive humans from improving what they do. That can’t be good for leading-edge capitalism, can it?

Or maybe they’re going to use the workers’ movements to train the machine-learning robots of the future.

Whichever it is, it sends an unpleasant tingle down my spine.


photo credit: corno.fulgur75 13e Biennale de Lyon: La Vie Moderne 2015 via photopin (license)

Should we dumb down AI?

An article in Wired magazine – Don’t Make AI Artificially Stupid in the Name of Transparency – suggests solutions to the governance of machine learning.

For some reason, it reminded me of a story I read some years ago. In 1968 a three-year experiment of keeping the clocks on BST all year round resulted in fewer road traffic deaths; the data suggested that more people were injured in the darker mornings, but fewer people were injured in the lighter afternoons.

Although I can’t validate it, I was told that the scheme was scrapped because, despite there being fewer deaths overall, the media focussed on the ones that did happen as a result of the experiment.

It seems to me that we have a similar problem with artificial intelligence – we’re in danger of focussing on the errors, not the benefits, desperately trying to understand what went wrong and limiting AI’s potential as a result. What the Wired article attempts to do is find solutions that let us make the most of AI, rather than dumbing it down so that we can understand it, and hence control it.

One of the major challenges for the media will be to give a balanced view, rather than taking the easy route of selling bad news. And it’s also a challenge for us science fiction writers: to portray nuanced futures that offer both hints of hope and words of warning.


photo credit: campra Kader Attia, Untitled via photopin (license)