Tag Archives: machine learning

Bodies, breeding, robots & work

Another Loving, Autonomous Agents, Boundless Bodies and Lasting Labour. What a wonderful mix of potential futures is wrapped up in the 2019 Virtual Futures’ Near-Future Fiction series, and I’m very excited that, as with the 2018 series, I’ll be co-curating the events with other authors.

We’re not searching for stories set on fanciful alien worlds, in post-apocalyptic landscapes where steam-punk bandits with laser guns fight mutated zombies, or featuring technology so hypothetical it is almost unimaginable. Our aim is to promote stories that think critically about the sorts of technological developments that are just over the horizon, and that provide a unique perspective on contemporary concerns related to the perceived trajectory of scientific innovation.

Those of you who have heard me answer the often-asked question, “do you write dystopia or utopia,” will know I don’t believe in such a simple view of the world. You’ll have heard me respond with the shorthand statement that one person’s utopia is often another’s dystopia. As our call for stories says, “science fiction is often the victim of this binary between utopia and dystopia – fiction in which all of our problems are fixed or created by a specific technology or technologies. In reality, our relationship with our technology never follows these simple categories – it is frequently a messier affair. Stories that seek to criticize, predict, or complicate realistically will be more successful than those intended to shock with apocalyptic visions or please with plastic paradises.”

Whether you’re an established or emerging author we’re keen to receive your stories; the deadline for submissions is 2 December 2018 and you can download the full guidelines from the Virtual Futures’ website.

If you’re interested in attending the events to hear the inevitable variety of futures our chosen authors create, you can read more about the themes and book your place via Eventbrite; the last series sold out, so get in early.

I’m really looking forward to reading all the submissions, writing a story for each theme and reading them to a live Virtual Futures audience.

And don’t forget, the future is ours and it’s up for grabs…


photo credit: Frits Ahlefeldt – FritsAhlefeldt.com global-trends-population-growth-culture-illustration-no-txt-by-frits-ahlefeldt via photopin (license)

When will humans change?

In the words of Frederik Pohl, my job as a science fiction author is, “to predict not the automobile but the traffic jam.”

I’m sure we all have mixed feelings about the future of robotics and artificial intelligence. I certainly do, and it’s such a broad subject that it’s no surprise emerging technologies and science generate big questions. What human activity we value and what it means to be human might not be new questions, but this could be the moment to assess them again.

Over the past few weeks I’ve been spending even more time than usual thinking and reading about robots and artificial intelligence. I’ve outlined some brief thoughts below, but there’s way too much to put into a short blog post like this, and others have written whole books about it, not least Max Tegmark in his book Life 3.0: Being Human in the Age of Artificial Intelligence.

The Human Black Box

“What do machine learning, deep machine learning and artificial intelligence have in common?”

“We believe them more than we believe our fellow humans.”

Is that true? 

When a doctor makes a diagnosis do we simply take it for granted they’ve got it right? Probably not. At the very least we’ll search all of our available sources of knowledge. That might mean asking our friends or friends of friends with similar experience or using Google to show us what it believes are the top relevant articles, which of course aren’t necessarily the wisest.

There’s a very high probability that we’ll gather information from a variety of sources and decide what to believe and what to discard. That is, until we use the magic of machine learning, where it all happens inside the algorithmic ‘black box’ and we simply have to believe.

This article in the New York Times suggests that humans are black boxes too; we don’t really understand how decisions are being made. This seems like a reasonable argument, but maybe what it tells us is that we shouldn’t trust algorithms any more than we should trust humans – ultimately we should decide for ourselves who and what to believe.

Or, does that simply lead to not trusting the experts?

A conundrum for sure, but not a new one.


photo credit: jaci XIII Psyche via photopin (license)