Category Archives: Interesting tech

AI reflects the past, not the future.

“Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead.”

Cathy O’Neil, Weapons of Math Destruction.

This is an easy book to read and it's a difficult book to read. It's easy because it's well written, with many real-life examples and extrapolations. It's difficult because the examples show how pervasive and corrosive big data and machine learning have become, and can yet become.

However, the quote I've chosen gives an uplift of spirits: if humans take more interest, control and responsibility, then the emerging world of artificial intelligence could be a good one.

I recommend this as essential reading for anyone with more than a passing interest in artificial intelligence who wants to think a bit more about the ethical aspects of big data and machine learning.

After all, as you've heard me say many times, 'the future is ours and it's up for grabs…'


photo credit: György Soponyai The Crystal Ball via photopin (license)

Am I AI?

Tonight, I’m on a panel on the future of storytelling at the Barbican FutureFest lates.

As part of the preparation I've been looking into stories written by AI and came across the recent "1 the Road", an intriguing road trip written by an AI with a camera, a GPS unit, a microphone and a clock attached to a car. The 'writer of the writer' is Ross Goodwin.

I have also 'co-written' a short piece with Transformer, a modern neural network that completes your text. Below is the piece we wrote together: I started, Transformer added more text, then I wrote the next connecting piece, and so on. Transformer only knew about the most recent text, so it was up to me to try to create coherence across the whole thing. It was a fascinating experience. Take a look at the result and see if you can spot which is mine and which is Transformer's.


The Piece, by Stephen Oram and Transformer

An advanced artificial intelligence has uncovered a glitch in our social media news feeds. It found that posts have been attributed to the wrong people, causing at least one suicide.

What does that mean for the future of the human race? And will we ever be able to tell the difference?  Is this an automated glitch? Will we get rid of the news feed and adopt something new? Or will we never see anything new, or just a whole lot of the same? The short answer to that is: it doesn’t matter.  We have an online media universe that is largely populated with people who do not care what information is presented to them. If all we had was the same content over and over, it would still be a better and safer place.  They wouldn’t be in any danger of suicide. But we have a vast number of news feeds, and many are created by human beings, not robots.  There is nothing that prevents a system from coming up with an algorithm that automatically chooses who to share what information with and who doesn’t, based on a mixture of interests, biases, and the underlying ideology that it was created to propagate. The big question is whether this is a glitch or the beginning of something more sinister.

(To get a glimpse of our system in action, click this link to browse the “Masthead.” See our list of people who’ve taken us seriously in the past.)

How does this system work? In principle, it works like this. First, each user knows their identity, or “identity.” We share our identities to keep track of what’s going on. We also have a system in place to keep track of who you were before you were in Masthead and the person or people you interacted with before that. If the user is using this service to share stories with one another, we share their stories to build an ever-growing database that’s updated every time you share a story.

In the next step, our users know if their stories will be included in the machine learning system, and we know how to resist manipulation, unlike other systems such as that revealed by the advanced AI. We may not be able to detect all possible manipulations by a given system, but we also have an incentive to avoid manipulation because we want the users to enjoy their stories.

Machine Learning is a field known for being very open-source but this post is less about a technical overview like this, and more about a personal experience in learning machine learning in my current work. If you have any questions, concerns or just want to share what is going on with your team, I’d love to meet you.


Why not have a go at writing something with Transformer yourself? I'd love to read the results, so feel free to post them in the comments below.
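
If you fancy tinkering beyond the web interface, here's a rough sketch of the same turn-taking process in Python, using the openly available GPT-2 model via the Hugging Face transformers library. Treat it as an illustration under my own assumptions: the model size, the 500-character window and the number of rounds are my choices, not the exact tool I used above.

# A sketch of human/AI turn-taking text generation.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

story = input("Write the opening sentence: ")

for _ in range(3):  # a few rounds of turn-taking
    # Like Transformer, the model only sees the most recent text,
    # so coherence across the whole piece stays the human's job.
    recent = story[-500:]
    result = generator(recent, max_new_tokens=60, do_sample=True)
    story += result[0]["generated_text"][len(recent):]  # keep only the new text

    # Human turn: write the next connecting piece.
    story += " " + input("Your connecting passage: ")

print(story)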


photo credit: Stanley Zimny (Thank You for 45 Million views) Listening to FDR via photopin (license)

Human Brain Organoids

In June this year I was invited to Oxford University by the International Neuroethics Society for a symposium on human brain organoids and other novel entities. As you can imagine it was a fascinating afternoon and another one of those moments when I felt as if science fiction could never be as strange as the real science itself.

There was talk of gastruloids, novel entities and chimeras. We discussed how to measure consciousness, the ethical valuation of moral status and the development of human brains inside animals, and how the closer we get to human brain surrogates, the more pressing the ethical issues become.

You can read all about the symposium and watch some videos on the neuroethics society website.
