[Originally published in the winter issue of BSFA’s Focus magazine]
Philip K. Dick’s futuristic judicial system in Minority Report relies on crimes being predicted and then prevented with pre-crime prosecution. A fascinating idea in itself, but the more interesting thread in the story for me is the human desire to protect belief in a system which is known to be defective.
The justice in Minority Report is certainly not free, fair or flawless; could a future system deliver those admirable aspirations? That’s the question I wish to explore, using the collective wisdom in All Tomorrow’s Futures as a launchpad.
We may disagree on what should be classed as a crime, but I assume we’re all familiar with crime as a concept and almost all of us will want to live within a legal framework and adhere to the rule of law.
Sometimes the system goes wrong, and what we do about that can be contentious. When I was writing my story for the section on police and justice in All Tomorrow’s Futures, the news was emerging that Andrew Malkinson’s wrongful conviction had been quashed, a long time after DNA evidence contradicting his guilt had been discovered. I was struck by the authorities’ seemingly deep reluctance to admit their errors, which made me consider whether an automated system might be more willing to acknowledge its mistakes and correct them more quickly – hence the story’s title, “Ego Statistical”.
I was also fascinated by the public’s longing to pay lower taxes while wanting every crime investigated and punished. This can only happen if the cost per prosecution is reduced, most likely through automation. However, one of the most worrying aspects of automation, and of artificial intelligence in particular, is bias.
Given that you can find criminality wherever you look for it and the resulting arrests encourage the search for more crime, we are in danger of entering a spiral of confirmation bias, turbocharged by automation.
These types of spirals are commonly countered by a willingness to find and use better data. More representative data could alleviate and even reverse bias, but convincing the marginalised to share their data with the authorities that they perceive to be prejudiced against them would be difficult. The time and effort it would take compared to the pressing need to automate is likely to mean that they would not be ‘won over’ before an automated system was deployed.
Therefore, to give ourselves any chance of a fair system, we need public debate about how representative the training data must be before an AI is deployed.
What might this mean for a future justice system? For example, would we accept the idea that data should be deliberately manipulated to rebalance the bias we know exists? If we do, what might a story along those lines look like? In “Poisoning Prejudice” I explore the idea of placing faked data in a crime prediction AI so that the police turn their attention to middle-class areas of the protagonist’s city, and find drunken driving, recreational drug taking and eventually corporate crime, leaning into the idea that the more you look, the more you find.
There is also a strong argument that much of the current prejudice, and hence bias, in our systems, whether human or machine, is caused by a focus on money: the systemic lack of it, or the personal greed for more. This in turn creates an orientation towards the path of least resistance, the most efficient method of finding unreported crimes.
It doesn’t have to be that way. In Tehnuka’s story, “Updated Intelligence”, the cops have badge bots that monitor them and challenge biased language and actions – the bots training the humans.
Or how about a future where we use quotas to ensure that the same percentage of the population in different demographic groupings is prosecuted for similar crimes? For example, if 5% of a low-income area of London such as Barking & Dagenham is prosecuted for robbery, then 5% of middle-income Camden and 5% of top-income Richmond must also be prosecuted for robbery.
This leads to two extrapolations.
Firstly, a definition of robbery broad enough to cover its different forms, such as treating street robbery and tax fraud as equivalent crimes; both obtain wealth illegally and both have a mental health impact on their victims, even if one is less immediate and obvious than the other.
Secondly, a likelihood that criminals will self-identify in the demographic that has already reached its target quota and where the police have stopped looking. For example, when Barking & Dagenham has reached its 5% quota for this new definition of robbery, that’s where the tax fraudsters will register their addresses in order to escape scrutiny and prosecution. A new form of identity fraud, if you will.
I imagine that we all want a system that investigates and prosecutes every crime. So why wouldn’t we be happy with a system that is fully automated, from the bot police working on surveillance data to our connected gadgets and public monitoring devices? An AI making judgements based on the facts to within an 83.33% level of accuracy (ten out of twelve jurors), and a prison system of single modular cells with the ability to visit, and be visited, virtually. All very neat.
But we balk at the idea of no human involvement, even though we know the system is biased, slow to correct its mistakes and expensive. We want a ‘human in the loop’. What would we think about the victim deciding on the level of the crime’s severity via an app? Should we at least have a human appeals process? But what if, as in Wendy M. Grossman’s story, “ELIJAH”, the infallibility of the machines goes unquestioned, making the appeals process pointless?
And finally, implementing a fully automated system supposes that we have perfected our justice system, so we can now code it and let the algorithms get on with the job. We can even build into the system the retraining of the AIs whenever a miscarriage of justice is identified and corrected. But don’t forget: given that a lot of AIs will always return the most probable answer rather than a ‘don’t know’, the nuance of the current human-heavy system might be lost.
So, where does this take us? Is it possible to create an automated system that we are all happy with?
To explore this further, why not set stories in a near-future world where the AIs that make up a jury are deliberately trained on different data sets? When researching and writing “See Me” for the learning and education section of All Tomorrow’s Futures, I became interested in AIs that debate from a particular point of view. Their bias is explicit and transparent. Could we agree on twelve differing ideologies or perspectives and form a jury from them, asking them to debate the issues and come to an agreed (83.33%) conclusion?
Another line of thinking is how society is becoming more and more depersonalised by the averaging and categorisation of our data, despite what the tech corporates might have us think. As a result, we have unwittingly become desensitised to the notion of the individual.
How about a world where there is total surveillance? Not Orwell’s Nineteen Eighty-Four, but instead a replication of the small communities where everybody knows and understands everybody else’s business, resulting in an accepted level of tolerance. Artificial intelligence should give us the ability to localise decisions by making the data, and the interaction with the justice system, more accessible. In turn, this could create a judicial process where punishment is decided through online voting.
Imagine a scenario where humans and machines work together to achieve the most efficient and fair system. This is where things become nuanced and difficult to fictionalise. However, we should write about such a future, because imagining it will influence it.
In All Tomorrow’s Futures there are some interesting ideas about this, mainly on fragile human-computer interfaces and on reducing the amount of human time spent on administration.
“Updated Intelligence” has bot lawyers which don’t have the most up-to-date legal definitions (but then do all human lawyers?). In Ira Nayman’s “The Program Never Lies” we have the cop who refuses to accept any other evidence of guilt, or the lack of it, after a facial recognition system returns a 93.7% likely match.
Sophie Sparkham has the human police in her story “All Born Machines” acting on AI orders, yet as Trevor Burke KC points out, the Nuremberg trials established that following orders is not a defence. Although I wonder whether their lack of on-the-spot decision-making experience might be a defence. After all, if humans only take decisions at the point of criticality, how do they gain the experience, from making the smaller decisions, that helps them quickly and correctly decide whether to follow AI orders on the bigger ones?
Both experts, Jayen Parmar and Trevor Burke, point to probable administrative savings on background tasks, such as summarising video evidence for police questioning or having a generative AI such as ChatGPT abridge the judgement for the jury to consider.
In the not-so-distant future we’re likely to have either specialist AIs or specialists who can prompt AIs for specific results. How about stories based on a system where AIs in different parts of the chain learn from each other, where, say, the evidence-summarising AI improves because of convictions that are later found to be wrong?
Or how about a more anarchist-leaning world where human intervention only comes at the point of sentencing and the victim decides the punishment from a list of options, such as blocking access to the NHS for perpetrators of serious violence? It would be interesting to explore how making these sentencing decisions transparent to everyone would affect the sentence handed out, given its potential to influence any future punishment this victim might receive should they themselves commit a crime, no matter how small.
Wherever we end up, we have to get there somehow, and the transition will be vital. As Trevor Burke points out, we are good at accepting new technology, such as fingerprints and DNA, into our justice system. Moving slowly towards automation could win the public’s trust and ensure the machines are reliable. In “All Born Machines”, Sophie Sparkham has a throwaway line that could be important: “Those were different times. Cowboy times, when AI was new to the game and everything was to play for.”
It seems likely, if it’s not already the case, that there is an inevitable period when the rules and regulations are not fit for purpose. How about setting stories in this wild west, where there are AI training cities that come with great financial advantages for the human population, but where law and order is a gamble? Where the tech-bros are the masked men, the regulators are the marshals and the entrepreneurs are the saloon and store owners? That could be fun.
In conclusion, the future will happen in some shape or other so it’s worth remembering the opening sentence of All Tomorrow’s Futures: ‘“Decisions,” Aaron Sorkin once quoted, “are made by those who show up.”’
[Image: RDNE]