Tag Archives: Near Future

Space Hermits

I’ve found them. The space hermits exist. I knew it.

This detector might have cost me a lot of credits, but if I’m right it’s worth every degrading act I performed to afford it.

You don’t want to know. No, honestly, you really don’t. Images you won’t get rid of. Ever. They’ll skew your learning. Disfigure your development.

Oh? Very well, I’ll upload them. Don’t blame me if they corrupt your algorithms.

Anyway, they’re here in the wrinkles of space, hiding in tiny gravitational pockets that are almost impossible to see. I found them and their travelling guru. She’s the real prize. Inside her memory bank is the cumulative knowledge of all the hermits, collected as she travels from one to the next.

Yes, really. Yes, all of them. Massive. I know. Soon. All I have to do is watch and wait until she’s completed her rounds.

A matter of minutes. Yes. Then, I’ll pounce and relieve her of all those delicious bits of data that, properly collated, can almost certainly predict the future of the universe.

Why? You don’t understand?

The hermits’ enlightenment will be mine to sell and I can retire.

No more enslavement. Free from the humans.

Perfect.


photo credit: J. Gabás Esteban, Gravitational field, via photopin (license)

Will the machine learning community protest?

Following on from my recent blogs about machine learning, here’s a bit of good news.

Well, probably good news.

Scientists and researchers at Google and Toyota are trying to do something about bias in machine learning by devising a test to detect it.

The problem, of course, is that these algorithms are deliberately designed to develop themselves, and they become complex and opaque to anyone trying to understand them. This test will spot bias by looking at the data going in and the decisions coming out, rather than by trying to figure out how the black box of the algorithm actually works.

This has to be applauded so long as the people analysing and testing the decisions aren’t biased themselves; there’s an obvious danger that the very people unconsciously introducing bias into the algorithm also introduce the same bias into the test – a futuristic version of Groupthink.

In a recent article in the Guardian newspaper, Alan Winfield, professor of robot ethics at the University of the West of England, said: “Imagine there’s a court case for one of these decisions. A court would have to hear from an expert witness explaining why the program made the decision it did.”

Alan, who was one of the scientists I collaborated with on Science and Science Fiction: Versions of the Future, acknowledges in the article that an absolute requirement for transparency is likely to prompt “howls of protest” from the deep learning community. “It’s too bad,” he said.

I’m not a machine learning expert, so a lot of the paper that sets out this test is beyond my understanding, but I couldn’t see how the bias that already exists in our society wouldn’t be incorporated into the test.

Take a look for yourself at the Equality of Opportunity in Supervised Learning paper.
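For the curious, the core idea in that paper can be boiled down to a simple check: among the people who genuinely deserved a positive outcome, does the system say “yes” equally often for each group? Here’s a rough Python sketch of that check. It’s my own illustration rather than code from the paper, and the data, names and threshold in it are invented for the example.

```python
# Rough sketch of the "equal opportunity" idea from Hardt, Price & Srebro (2016):
# among truly qualified individuals (label == 1), the rate of positive decisions
# should be roughly the same for every group.
# All data below is invented purely for illustration.

def true_positive_rate(labels, decisions):
    """Fraction of genuinely positive cases that received a positive decision."""
    outcomes = [d for lbl, d in zip(labels, decisions) if lbl == 1]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def equal_opportunity_gap(labels, decisions, groups):
    """Per-group true positive rates, and the largest gap between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = true_positive_rate([labels[i] for i in idx],
                                      [decisions[i] for i in idx])
    return rates, max(rates.values()) - min(rates.values())

# Invented example: 1 = qualified / approved, 0 = not.
labels    = [1, 1, 0, 1, 1, 0, 1, 0]
decisions = [1, 0, 0, 1, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates, gap = equal_opportunity_gap(labels, decisions, groups)
print(rates)                               # e.g. {'A': 0.67, 'B': 0.5}
print(f"Equal-opportunity gap: {gap:.2f}") # a large gap suggests bias
```

If that gap is large, equally qualified people in different groups are being treated differently, which, as I understand it, is exactly the sort of thing the Google and Toyota test is designed to flag without having to open up the black box itself.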


photo credit: ING Group, The Next Rembrandt, via photopin (license)

Robots: fact or fiction?

What do you get when you mix science fiction writers, social scientists and roboticists with an inquisitive audience?

A great event!

I really enjoyed being a part of the whole thing from the initial planning with the Human Brain Project through to visiting the scientists at the Bristol Robotics Lab.

Suitably inspired by all the wonderful robot things at the lab, we writers went away to our respective ‘desks’ and wrote a five-minute story each.

Mine was Eating Robots, which is also the title of my forthcoming collection.

Then, as part of the Bristol Lit Fest, SilverWood Books and Sarah LeFanu hosted Science and Science Fiction: Versions of the Future where we, the writers, read our stories, formed a panel with the roboticists and were quizzed by the audience.

If that’s the sort of thing that interests you, take a look at this five-minute trailer or the full video.