In June this year I was invited to Oxford University by the International Neuroethics Society for a symposium on human brain organoids and other novel entities. As you can imagine, it was a fascinating afternoon, and another of those moments when I felt as if science fiction could never be as strange as the real science itself.
There was talk of gastruloids, novel entities and chimeras. We discussed how to measure consciousness, how to assess moral status, what it would mean to develop a human brain inside an animal, and how the closer we get to human brain surrogates, the more pressing the ethical issues become.
You can read all about the symposium and watch some videos on the neuroethics society website.
Here’s a taster from the summary:
It’s been a very busy few months; you only need to look at my events page to see what I mean. Guess what? Every time I’ve set aside some time to sit down and write a few words about my experiences, something else crops up and the chance slips by.
Although it’s a bit late, here’s a very short reflection on my ongoing collaboration with King’s College London and the Human Brain Project. It’s called ‘Transforming Future Science through Science Fiction.’
“What do machine learning, deep machine learning and artificial intelligence have in common?”
“We believe them more than we believe our fellow humans.”
Is that true?
When a doctor makes a diagnosis, do we simply take it for granted that they’ve got it right? Probably not. At the very least we’ll search all of our available sources of knowledge. That might mean asking friends, or friends of friends with similar experience, or using Google to show us what it believes are the top relevant articles — which, of course, aren’t necessarily the wisest.
There’s a very high probability that we’ll gather information from a variety of sources and decide what to believe and what to discard. That is, until we use the magic of machine learning, where it all happens inside the algorithmic ‘black box’ and we simply have to believe.
This article in the New York Times suggests that humans are black boxes too; we don’t really understand how human decisions are made either. That seems like a reasonable argument, but perhaps what it tells us is that we shouldn’t trust algorithms any more than we trust humans – ultimately we should decide for ourselves who and what to believe.
Or, does that simply lead to not trusting the experts?
A conundrum for sure, but not a new one.
photo credit: jaci XIII Psyche via photopin (license)