Tag Archives: Artificial Intelligence

Intelligence & Cuttlefish

Yesterday, I met up with Danbee Kim, a researcher at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour and Scientist-in-Residence at the Brighton Sea Life Centre.

This was the second time we’d met, this time with her labmates, and wow, what a wonderful, welcoming and, in both senses of the word, extremely bright bunch of people.

I was invited to a ‘teach-in’ for the lab about machine learning and I had one of those afternoons where you think you’re just about keeping up until someone asks a question and another layer of complexity is exposed. I loved it.

Danbee and I are collaborating on a story based around her research and the work of the centre. After four hours in her company I know more than I ever thought I would about the difficulties of defining intelligence, cuttlefish and what puts the deep into machine learning. 

I’m really hoping this is the start of a long collaboration because these guys are great, there’s loads more I’d like to understand and I reckon between us there’s a lot of near-future fiction waiting to be written.

Now, if you want to know a bit more about these things too you’ll have to come along to the Fitzrovia Festival event – Collaboration Works – where I’ll be reading the story and Danbee will be responding. You can then join in the Q&A and chat over a glass of wine at the end.


photo credit: q.phia cuttle, tanjung kusu-kusu, lembeh, indonesia, 2017 via photopin (license)

Cracks in the Code

Why are we enthralled and appalled by the idea of a robot apocalypse?

Getting lost in the fantasy of a film or a book, being petrified by the impending end of the world while knowing at the back of our minds that it’s not real, is exciting and entertaining. But why are so many stories like this? Perhaps the reality of our world is too harsh to accept and this form of escapism helps. Or it could simply be that it’s easier to write apocalyptic stories of dread and danger than it is to write aspirational and inspirational utopias. It’s certainly a lot easier than writing about the human race messily stumbling forwards in a “can’t quite work out who the good guys are” kind of way.

Over the past few months I’ve met and shared a stage with scientists, sometimes to talk about their work and sometimes for them to respond to my work. It’s been fascinating and I want to do a lot more of it.

During one of these meetings we talked about what was acceptable on the dystopian spectrum and, in their view, Black Mirror was good and Terminator was bad. In fact, I’ve been particularly struck by how these collaborations have counteracted the tendency towards writing apocalyptic and improbable stories. It’s almost as if there’s a symbiotic tension; writing science fiction with a view to highlighting the potential uses of scientists’ discoveries while in their presence also makes me write stories that are not only entertaining but also plausible. I have to say it’s a relief the reviews for my new collection, Eating Robots, have compared it favourably to Black Mirror and there’s no mention of Terminator.

Through work with the Bristol Robotics Laboratory, the Human Brain Project at King’s College and various events with Virtual Futures, I’ve been exposed to science and art that I wouldn’t normally have come across. It’s these conversations and observations that have led me to believe that we shouldn’t be worrying so much about the robot apocalypse as looking for the cracks and crevices of poorly written or inadvertently biased code.

When choosing what would be in Eating Robots and Other Stories, I decided it was only right that the scientists I’ve been working with should have a voice, so I invited them to contribute a response to whichever story they wanted. Christine Aicardi, a Senior Research Fellow at King’s College London, touches on this notion of the cracks in the code in her response to The Thrown Away Things. She says, “Instead, it may lurk where we don’t expect it, in the discarded and the obsolete, in the faulty lines of code of an ill-designed and unmaintained software – here, in the decision-making modules of the bric-a-brac.”

So, I’ve come to the conclusion that not only is the messy bit in-between the dystopic and the utopic more likely, it’s also much more interesting.

If you feel the same then I’d love to hear your views on the stories in Eating Robots, and if you happen to be in London on 6 June then please come along to the launch. I’ll be reading from the collection and Christine, among others, will be there to give her response.



photo credit: Eugen Naiman Green rock via photopin (license)

Breaking the Rules Is Not Allowed

Screaming white noise. Pitch black darkness.

What a way to be greeted into a new day.

Aiden felt around for the edge of his cardboard mattress. Beyond its frayed borders, buried among the food scraps and his few discarded clothes, was the nectar he craved. The withdrawal was intense as the nanobots issued their friendly warning that his addiction needed feeding for him to stay alive.

Fumbling around in the detritus of his life he found his last vial of nanobot nectar and gulped it down.

A pinpoint of bright light appeared. Then another. And another. And another. He blinked. The nanobots were working. A gradual shift from the oppressive white noise to the welcoming sounds of a city about its daily business.

As his sight returned he noticed the clock on the house control unit in which his robot waited while he slept.

‘Jessie. Why didn’t you wake me? I told you – 7am.’