It’s a well-known saying among writers that you have to read to write. I imagine that’s the same for any craft – the more you see of other people’s work the better your own becomes.
I’m in the fortunate position at the moment of being the lead curator for a series of science fiction events themed around the near future (links to them are on my future events page). This means that not only do I get to read all the submitted stories and choose the best with my co-curator, I also get to hear the authors read their stories on the night.
And, it may sound like a cliché, but it really is a privilege.
Talking of which, it’s also incredibly pleasing that Vector, the critical journal of the British Science Fiction Association, has published an article on the thinking behind these Near-Future Fiction events.
An article in Wired magazine – Don’t Make AI Artificially Stupid in the Name of Transparency – suggests solutions to the governance of machine learning.
For some reason, it reminded me of a story I read some years ago. In 1968 a three-year experiment of not changing the clocks from BST resulted in fewer road traffic deaths overall; the data suggested more people were injured in the darker mornings, but fewer were injured in the lighter afternoons.
Although I can’t validate it, I was told that the reason the scheme was scrapped was that, despite there being fewer deaths overall, the media focussed on the ones that did happen as a result of the experiment.
It seems to me that we have a similar problem with artificial intelligence – we’re in danger of focussing on the errors rather than the benefits, desperately trying to understand what went wrong and limiting its potential as a result. What the Wired article attempts to do is find solutions that let us make the most of AI, rather than dumbing it down so we can understand it, and hence control it.
One of the major challenges for the media will be to give a balanced view, rather than taking the easy route of selling bad news. And, it’s also a challenge for us science fiction writers to portray nuanced futures that have both hints of hope and words of warning.
photo credit: campra Kader Attia, Untitled via photopin (license)
I was struck recently by a piece in Nature: the international journal of science on what science fiction has to offer a world where technology and power structures are rapidly changing.
As the headline says, “With technological change cranked up to warp speed and day-to-day life smacking of dystopia, where does science fiction go? Has mainstream fiction taken up the baton?”
It’s a fairly widely held view that sci-fi doesn’t predict the future very well, but it’s good at helping us reflect on our own humanity in a changing world, and some of the articles touch on this.
We might be rubbish at predicting the future because technology doesn’t develop in a straight line, but many of the scientists I’ve spoken to will tell you about the sci-fi that inspired them. Although, I guess that’s influencing rather than predicting.
Something I didn’t pick up in the articles that I think is important is whether we would be so sensitive to real-life ‘dystopia’ if we hadn’t had hugely popular sci-fi such as Nineteen Eighty-Four, Brave New World, Blade Runner and, more recently, Black Mirror.
Have these works of science fiction made us more attuned to the attempts to manipulate us, or more wary of how technology might go wrong once you mix the messiness of humanity with the cracks in the code?
I think they have; I think they give us cautionary tools.
Whatever your view on science fiction, these six articles by leading sci-fi writers are well worth a read.
photo credit: creative heroes The Supervision – Stop Mass Surveillance! via photopin (license)