[Originally published in the winter issue of BSFA’s Focus magazine]
Embracing randomness and thriving in chaos are human traits that we shouldn’t lose or give away. To quote David Bowie, “What we try to keep out of our existence is chaos, which is a very real part of our lives. And our refusal to accept chaos as being integral to our existence, I think, has been one of the greatest mistakes as a civilization that we’ve made.”
Increasingly we delegate decisions and responsibility to technology, underpinned by a drive for efficiency. Indeed, if our creations are superior to us, why wouldn’t we delegate most of life’s tasks to the machines?
In making that delegation, we assume that humans are replicas of the technology we create, a recurring theme in human history. In Ancient Rome, thinkers such as Theodoret reached for an aqueduct metaphor; since then we have described ourselves as cogs and wheels, and latterly as computers and artificial neural networks. We are habitually, and often too literally, tempted into understanding ourselves through the lens of our technology.
There are certainly aspects of day-to-day life that technology performs more efficiently or effectively, especially tasks that are dangerous or boring, that demand heavy processing or that need vast pattern-matching capability. These range from mundane production lines and car washes to complex search and rescue robots and the electronic movement of money.
Turning to a wider scope, ‘Big Data’ gathers as much information as it can and then analyses it by looking for patterns, correlations and so on. This is proving highly effective for personalised and preventative healthcare where a person’s imminent health problems can be predicted sooner (and more cheaply). In the future, the resulting treatment could be personalised because the analytics will find instances of other patients with similar contributing factors, such as related health issues and lifestyles and can then vary the treatment accordingly. No longer will we have to accept the flawed approach of ‘one size fits all’ treatment.
As a near-future fiction writer, what interests me more is the delegation of the social aspects of life. Dating apps are a prime example of algorithmic alleyways, of tunnel vision. They ‘decide’ who you should meet by only showing you the people whom, according to the analytics, you ‘match’. They have sifted out a whole bunch of people who might have changed your perspective on life. Talking of narrowing options, it is not beyond the imagination to think that search engines favour restaurants with good reviews when you ask for a list of local options. Taking that a step further, why wouldn’t Google Maps show you a route to your chosen eatery which takes you past the other shops that advertise with the mapping app?
I’m often amazed and disheartened when someone says they know that company X or Y treats its workers badly and has atrocious ethics around sustainability, but continues to buy its products because the service is easy to use. It’s a case of convenience over conscience. Wishing a corporation would change while still using it is relinquishing our ethical agency to its technology. Naturally, we all have weak spots, and the eagle-eyed among you will have spotted that my books are for sale on Amazon. As an aside, Cory Doctorow’s Chokepoint Capitalism is worth reading on the subject of creatives and our current form of capitalism.
Extrapolating the personalisation trend in society is crucial for a science fiction writer, especially when we ‘fast-forward’ in both good and bad directions.
For example, are we heading to a world where all entertainment is generated for us by artificial intelligence? A life of personalised music, art and literature, created as unique pieces for each individual and produced with your preferences in mind. Imagine a story or a piece of music written for you and only for you.
On one level this sounds perfect, we all get exactly what we want. But taken another way, there’s no shared experience and an ever-decreasing boundary around our ‘tastes’. Where are those magical moments of surprise?
As a writer, I can see a lot of value in exploring how this personalisation might be experienced by characters from different contexts or how a diverse range of neurotypes would react, given the variety of ways they might interpret and respond to social cues or situations.
What about a story that follows a typical day in the life of Sam Smith? They wake to a recommended diet for the day, based on their gut biome and aided by an automated delivery of the food they require. They walk to work along the route prescribed by their app, which takes them down the street with sports shops rather than the one with doughnut shops. The same app is directing others along the route and helping Sam ‘bump into’ those it deems will make compatible partners. Sam is taken along the street of kids’ clothes and baby shops, heightening their desire for a child.
At work, tasks are allocated based on the reaction of their gut biome to previous tasks. Sam is bored, but understands that their income depends on being efficient and highly productive. Back at home, the evening is spent with their grandmother, who tells them about a chat with her doctor, who spotted a mental health issue that was not obviously linked to the physical problem she wanted advice on. Grandma also tells the tale of a friend who persuaded her to taste mussels with her eyes closed (a food she had tried before and hated) while encouraging her to imagine sitting on a warm beach with the sea lapping its waves in the background. She loved them and was eager to try other foods she had thought she disliked. It was a turning point in grandma’s gastronomic gratification.
Finally, she recounts the same old story about taking a wrong turn on her way home. Desperate for the toilet, she went into a biker bar that she wouldn’t normally be seen dead in and there she met grandpa.
Not a particularly thrilling plot, but you get the idea and there are more positive ways of factoring optimisation into the story. Such as…
Some years later, identity and money are pretty much the same thing, because the markers that identify who you are also act as the validators of the money you own. This means that the data trail Sam leaves behind provides a rich record of actual activity. As the saying goes, when you really want to know what’s been happening, ‘follow the money’.
Sam no longer relies on generalised AI predictions of what they’re most likely to need and enjoy. Now, their actual activity is followed forensically by the combined collection and analysis of all of their data by the company that ‘manages’ their identity on their behalf. An individualised guide to life has become possible, even if Sam wants that to include an element of randomness. When the applications that are our interface with the world sift and sort what we see and what we are offered, and minimise the choices we need to make, we have optimised personalisation.
Using scales from zero to ten to determine how they feel, Sam can adopt any particular ‘persona’ for the day. They can set their own personality parameters and alter them whenever they wish. They choose the option that increases the randomness of their life until they’ve reached their personal limit of chaos, as reported by their bio-feedback. Imagine a life where you can control the degree of chaos to be compatible with your desires.
With all considerations of how technology might be good or bad for us, the reality is always going to be somewhere in the grey middle-ground inhabited by the messy humans and the flawed tech. That shouldn’t stop us imagining what might be and entertaining others with it.
For the past couple of years, I’ve been involved in a project with King’s College London looking at whether it’s possible to automate the analysis of how a mother talks about her young child in order to predict future mental health issues for the child.
This project raises a number of ethical considerations, and my role has been to write two stories designed to help the general public understand the issues. Apart from the overarching ethics of the project, even if no technology were involved, there are the usual potential downsides of imperfect data leading to selection bias. This is made more difficult by the nuances of cultural difference and the implications of how a person says a phrase. For example, one of the stories, A Mother’s Nightmare, highlights the difficulty in determining whether the phrase ‘a mother’s nightmare’ is said with warmth or hostility. Try saying it in different ways and you’ll see the problem. However, in a cash-strapped health service, having a technology that can cheaply triage families to find those most likely to benefit from mental health services means that the NHS can increase its reach to those patients who wouldn’t normally get noticed. This has the potential to be a gateway app for use worldwide.
The second story for the project, Standard Deviations, touches on the permissions for use of data. Gathering mental health data from the whole population (and beyond, for incoming migrants) would need to be mandatory to be of greatest use, as you need a comprehensive database of everyone to effectively target people in need. And it’s likely that the most marginalised in society will be wary of sharing their data. The key question for everyone is whether the benefit of accurately targeted health care is worth the infringement of personal privacy, or intimacy, as it’s sometimes called.
Both of these stories can be found in my new collection, Extracting Humanity.
I hope that those who do imagine the future and those who make it happen can work together for the good of us all. Personally, I want a sprinkle of chaos over too much order — unless it’s applied to my healthcare, for example. However, I do realise that wanting any level of randomness in life is probably a sign of privilege, a sign that my basic needs are covered.
Recently, I re-read the Russian novel We by Yevgeny Zamyatin. It is roughly one hundred years old, but still extremely pertinent in the way that the citizens of OneState are willingly duped into accepting the perfection of mathematics as the underpinning method of social organisation. With this becoming more achievable through the power of Big Data, smart cities and what is loosely termed artificial intelligence, we should heed the warning and not passively accept big tech’s offering without scrutiny. In We, the population are told that happiness can be calculated, and today we are at high risk of being hoodwinked into the same belief.
So, let’s have the nuanced discussion about how we can work alongside our technology, recognising that humans are better at certain things and machines at others. We must be careful that overblown scare stories about artificial intelligence do not distract us from discussing the real, live issues of its use, such as the cementing of current societal bias into systems that are neither transparent nor accountable. These doomsday narratives also serve to propagate the myth that only the tech companies know how to save us. If we come together as a society to agree our future, then we might avoid the need for a repeat of the Luddite movement and the subsequent repression by legal and military force against ordinary folk standing up for themselves.
The polarisation of views encouraged by our media and social media companies is at best unhelpful and at worst cynically dangerous. However, science fiction writers hold a particularly powerful position in this situation, having the ability to ask the questions and, most importantly, to inspire others to ask their own questions.
Whatever the world looks like as I write this in 2023, the line where human and machine competences meet will most likely have changed by the time you read it and will keep on changing. We need to regularly review and refresh what we are willing to delegate and do this with method and, perhaps ironically, not randomly. Let’s write the stories that help us choose our best collective future. After all, the future is ours and it’s up for grabs.