
Cracks in the Code

Why are we enthralled and appalled by the idea of a robot apocalypse?

Getting lost in the fantasy of a film or a book, petrified by the impending end of the world while knowing at the back of our minds that it’s not real, is exciting and entertaining. But why are so many stories like this? Perhaps the reality of our world is too harsh to accept and this form of escapism helps. Or it could simply be that it’s easier to write apocalyptic stories of dread and danger than it is to write aspirational and inspirational utopias. It’s certainly a lot easier than writing about the human race messily stumbling forwards in a “can’t quite work out who the good guys are” kind of way.

Over the past few months I’ve met and shared a stage with scientists, sometimes to talk about their work and sometimes for them to respond to my work. It’s been fascinating and I want to do a lot more of it.

During one of these meetings we talked about what was acceptable on the dystopian spectrum and, in their view, Black Mirror was good and Terminator was bad. In fact, I’ve been particularly struck by how these collaborations have counteracted my tendency towards writing apocalyptic and improbable stories. It’s almost as if there’s a symbiotic tension: writing science fiction that highlights the potential uses of scientists’ discoveries, in their presence, pushes me to write stories that are not only entertaining but also plausible. I have to say it’s a relief that the reviews for my new collection, Eating Robots, have compared it favourably to Black Mirror, with no mention of Terminator.

Through work with the Bristol Robotics Laboratory, the Human Brain Project at King’s College and various events with Virtual Futures I’ve been exposed to science and art that I wouldn’t normally have come across. It’s these conversations and observations that have led me to believe that we shouldn’t be worrying so much about the robot apocalypse as looking for the cracks and crevices of poorly written or inadvertently biased code.

When choosing what would be in Eating Robots and Other Stories I decided it was only right that the scientists I’d been working with should have a voice, so I invited them to contribute a response to whichever story they wanted. Christine Aicardi, a Senior Research Fellow at King’s College London, touches on this notion of the cracks in the code in her response to The Thrown Away Things. She says, “Instead, it may lurk where we don’t expect it, in the discarded and the obsolete, in the faulty lines of code of an ill-designed and unmaintained software – here, in the decision-making modules of the bric-a-brac.”

So, I’ve come to the conclusion that not only is the messy bit in between the dystopic and the utopic more likely, it’s also much more interesting.

If you feel the same then I’d love to hear your views on the stories in Eating Robots, and if you happen to be in London on 6 June then please come along to the launch. I’ll be reading from the collection and Christine, among others, will be there to give their responses.



photo credit: Eugen Naiman Green rock via photopin (license)

Will the machine learning community protest?

Following on from my recent blogs about machine learning, here’s a bit of good news.

Well, probably good news.

Scientists and researchers at Google and Toyota are trying to do something about bias in machine learning by devising a test to detect it.

The problem, of course, is that these algorithms are deliberately designed to develop themselves, and they become complex and opaque to anyone trying to understand them. The test will spot bias by looking at the data going in and the decisions coming out, rather than trying to figure out how the black box of the algorithm actually works.
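The sort of input/output check the researchers describe can be sketched in a few lines. This is a minimal illustration in the spirit of their equal-opportunity criterion, not their actual test, and every decision and label below is invented: we compare how often truly qualified members of each group receive a positive decision, without ever opening the black box.

```python
# A toy input/output bias check: we only see the model's decisions and
# the ground-truth labels, never its internals. All data is made up.

def true_positive_rate(decisions, labels):
    """Fraction of truly qualified cases that received a positive decision."""
    qualified = [d for d, y in zip(decisions, labels) if y == 1]
    return sum(qualified) / len(qualified)

# Hypothetical decisions for two demographic groups (1 = approved / qualified).
group_a = {"decisions": [1, 1, 0, 1, 0], "labels": [1, 1, 0, 1, 1]}
group_b = {"decisions": [0, 1, 0, 0, 0], "labels": [1, 1, 0, 1, 1]}

tpr_a = true_positive_rate(group_a["decisions"], group_a["labels"])
tpr_b = true_positive_rate(group_b["decisions"], group_b["labels"])

# A large gap between the two rates flags the model as biased,
# however the black box arrived at its answers.
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}")  # prints "TPR gap: 0.50"
```

Note that this only catches bias the auditor thinks to measure, which is exactly the Groupthink worry: the check is no better than the groups and labels chosen by the people running it.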

This has to be applauded so long as the people analysing and testing the decisions aren’t biased themselves; there’s an obvious danger that the very people unconsciously introducing bias into the algorithm also introduce the same bias into the test – a futuristic version of Groupthink.

In a recent article in the Guardian newspaper, Alan Winfield, professor of robot ethics at the University of the West of England, said: “Imagine there’s a court case for one of these decisions. A court would have to hear from an expert witness explaining why the program made the decision it did.”

Alan, who was one of the scientists I collaborated with on Science and Science Fiction: Versions of the Future, acknowledges in the article that “an absolute requirement for transparency is likely to prompt ‘howls of protest’ from the deep learning community. ‘It’s too bad,’ he said.”

I’m not a machine learning expert so a lot of the paper that sets out this test is beyond my understanding, but I couldn’t see how the bias that already exists in our society wouldn’t be incorporated into the test.

Take a look for yourself at the paper, Equality of Opportunity in Supervised Learning.


photo credit: ING Group The Next Rembrandt via photopin (license)

Bias in, bias out

Google Translate has developed an understanding of the meaning behind words so that it can translate directly from one language to another using the concepts behind phrases rather than a word-by-word translation.

This means it can be taught to translate from French to German and from German to Chinese, and because it understands language at a conceptual level it can translate French into Chinese without going via German; it matches concepts, not words.
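The idea can be caricatured in a few lines. This is emphatically not how Google’s neural system works (it learns a continuous shared representation, not a lookup table), and the vocabulary is invented, but it shows why a shared concept space lets you translate between a pair of languages you never explicitly trained together:

```python
# Toy illustration of translation via a shared "concept" space.
# Every language maps words to language-independent concept IDs,
# so any pair of languages can be bridged without a pivot language.

to_concept = {
    "fr": {"chat": "CAT", "maison": "HOUSE"},
    "de": {"Katze": "CAT", "Haus": "HOUSE"},
    "zh": {"猫": "CAT", "房子": "HOUSE"},
}

# Invert each table so we can go from a concept back to a word.
from_concept = {
    lang: {concept: word for word, concept in table.items()}
    for lang, table in to_concept.items()
}

def translate(word, src, dst):
    """Translate via the shared concept, never via a pivot language."""
    return from_concept[dst][to_concept[src][word]]

print(translate("chat", "fr", "zh"))  # prints 猫, French to Chinese directly
```

In the real system the “concepts” are opaque learned vectors rather than readable labels like `CAT`, which is precisely why nobody, including its creators, can inspect its internal language.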

Should we be worried by this latest revelation of a neural machine-translation system that has created its own internal language that nobody understands?

I’m not sure.

Imagine an algorithm to determine where to concentrate health-care research. If its inputs are biased towards one section of society, accidentally rather than by design, wouldn’t it develop a skewed view of the world?

Wouldn’t it favour some people over others?

Yes, but we already have a healthcare system that does that, don’t we? And this could be less biased, because it would be much more effective at using large volumes of data to determine the best outcome overall.

The difference is that in a world of “bias in, bias out” and opaque algorithms nobody, not even the creators, would know why it made the choices it did.

Maybe this is a price worth paying.

As this TechCrunch article says, “Neural networks may be complex, mysterious and a little creepy, but it’s hard to argue with their effectiveness.”


photo credit: Adi Korndörfer … brilliant ideas via photopin (license)