Author Archives: Stephen Oram

Unicorn – funny or not?

Every now and again I submit something to one of the online magazines that publish very short pieces. It’s partly the challenge of condensing a story down to so few words, and partly that they’re often a lot of fun too.

Recently, I submitted on the theme of Unicorns. I wasn’t surprised when they turned down the piece, as I did stretch the intention behind the theme quite a bit. But, judging from their feedback, I’m not sure they got the humour.

Oh well, here it is for your enjoyment…

Transgenic

Reg throws another rock. One rock that joins the many pelting the deniers’ citadel.

Angry mob or legitimate protest; choose your side.

The Unicorn exists and it’s far too dangerous to be kept secret.

We, the protesters, protest and the rocks hail down.

Why was it brought into existence? It’s incomprehensible.

‘No GM. No GM,’ we chant.

A man steps from the glass fortress and is struck on the head by several rocks. The blood is disgusting. And so is he.

Releasing Unicorn, the genetically modified corn designed to wipe out all other corn, is unforgivable.

Reg throws another rock.


photo credit: Madame Licorne photos officielles via photopin (license)

The Human Black Box

“What do machine learning, deep machine learning and artificial intelligence have in common?”

“We believe them more than we believe our fellow humans.”

Is that true? 

When a doctor makes a diagnosis, do we simply take it for granted that they’ve got it right? Probably not. At the very least we’ll search all of our available sources of knowledge. That might mean asking our friends, or friends of friends with similar experience, or using Google to show us what it believes are the top relevant articles, which of course aren’t necessarily the wisest.

There’s a very high probability that we’ll gather information from a variety of sources and decide what to believe and what to discard. That is, until we use the magic of machine learning, where it all happens inside the algorithmic ‘black box’ and we simply have to believe.

This article in the New York Times suggests that humans are black boxes too; we don’t really understand how our decisions are made either. This seems like a reasonable argument, but maybe what it tells us is that we shouldn’t trust algorithms any more than we trust humans; ultimately we should decide for ourselves who and what to believe.

Or, does that simply lead to not trusting the experts?

A conundrum for sure, but not a new one.


photo credit: jaci XIII Psyche via photopin (license)

Not even good for capitalism?

Shock horror!

Amazon has patented a way of tracking hand movement to monitor their workers’ performance. Nothing Amazon do should shock; they’re a corporation fighting for dominance in a capitalist world.

Maybe they are planning on tracking movement, comparing it against the efficiency algorithms and punishing the transgressors. Wouldn’t that be a shot in the foot, though? It presumes that the optimum movement has already been found, and it precludes those clever, inventive humans from improving what they do. That can’t be good for leading-edge capitalism, can it?

Or maybe they’re going to use the workers’ movements to train the machine learning robots of the future.

Whichever it is, it sends an unpleasant tingle down my spine.


photo credit: corno.fulgur75 13e Biennale de Lyon: La Vie Moderne 2015 via photopin (license)