In the words of Frederik Pohl, my job as a science fiction author is, “to predict not the automobile but the traffic jam.”
I’m sure we all have mixed feelings about the future of robotics and artificial intelligence. I certainly do, and it’s such a broad subject that it’s no surprise emerging technologies and science generate big questions. Which human activities we value, and what it means to be human, might not be new questions, but this could be the moment to assess them again.
Over the past few weeks I’ve been spending even more time than usual thinking and reading about robots and artificial intelligence. I’ve outlined some brief thoughts below, but there’s way too much to put into a short blog post like this, and others have written whole books about it, not least Max Tegmark in his book Life 3.0: Being Human in the Age of Artificial Intelligence.
It’s a well-known saying among writers that you have to read to write. I imagine that’s the same for any craft: the more you see of other people’s work, the better your own becomes.
I’m in the fortunate position at the moment of being the lead curator for a series of science fiction events themed around the near future (links to them are on my future events page). This means that not only do I get to read all the submitted stories and choose the best with my co-curator, I also get to hear the authors read their stories on the night.
And, it may sound like a cliché, but it really is a privilege.
Talking of which, it’s also incredibly pleasing that Vector, the critical journal of the British Science Fiction Association, has published an article on the thinking behind these Near-Future Fiction events.
An article in Wired magazine – Don’t Make AI Artificially Stupid in the Name of Transparency – suggests solutions to the governance of machine learning.
For some reason, it reminded me of a story I read some years ago. In 1968, a three-year experiment in keeping the clocks on BST year-round resulted in fewer road traffic deaths overall; the data suggested more people were injured in the darker mornings, but fewer in the lighter afternoons.
Although I can’t validate it, I was told the scheme was scrapped because, despite there being fewer deaths overall, the media focussed on the ones that did happen as a result of the experiment.
It seems to me that we have a similar problem with artificial intelligence: we’re in danger of focussing on the errors rather than the benefits, desperately trying to understand what went wrong and limiting the technology’s potential as a result. What the Wired article attempts to do is find solutions that let us make the most of AI, rather than dumbing it down so that we can understand it, and hence control it.
One of the major challenges for the media will be to give a balanced view rather than taking the easy route of selling bad news. And it’s also a challenge for us science fiction writers to portray nuanced futures that carry both hints of hope and words of warning.
photo credit: campra Kader Attia, Untitled via photopin (license)