Automation: a life of luxury and the death of democracy?

There’s been a fair amount of press coverage lately on the potential for artificial intelligence and robots to take our jobs, and on how a Universal Basic Income could be part of the solution. It’s something the Silicon Valley tech giants are putting their shoulders behind.

Some say that’s a good thing, while others disagree.

The European Parliament’s legal affairs committee report on Civil Law Rules on Robotics “takes the view that in the light of the possible effects on the labour market of robotics and AI a general basic income should be seriously considered, and invites all Member States to do so.”

As I’ve said before, I’m a big fan of Universal Basic Income for all sorts of reasons. Not least because it frees us up to live the lives we want and, as far as I can tell, it’s the most credible way to have a capitalist society that still lets people opt out if they want to.

However, it was the link between major corporations, automation and democracy that struck me most at a gathering of London Futurists where Nick Srnicek and Alex Williams talked about their book, Inventing the Future: Postcapitalism and a World Without Work.

The argument from the audience that captured my attention went something like this…

With full automation we don’t have to work, but stuff can still be produced for people to buy and economies can still grow.

There are only a handful of companies that could realise full automation, e.g. Google, Amazon and Facebook.

Universal Basic Income is possible in an automated and thriving economy.

And now for the scary bit… the few mega-companies that are generating the profits and controlling the economy will have the ultimate say in how the country runs. It’ll be their shareholders that hold the power. Democracy dies, sold off for a life of doing as you please.

It certainly made me stop and think.

I haven’t changed my mind, but I have developed a little more caution.


photo credit: WanderingtheWorld (www.ChrisFord.com) ‘Bonfire’, United States, New York, The Hamptons via photopin (license)

Deliver Me from Darkness

Where was that bloody delivery drone?

He’d been waiting for three hours, from the moment he’d woken up.

How many times would he have to stay at home on the promise that his new eyes would be arriving that day?

Okay, so he’d not chosen guaranteed next-day delivery, but at the time he’d ordered them his eyes still had a good four weeks left in them. And, yes, he’d been a bit casual about making sure he was there to sign for them, but the more critical things got, the less the company seemed to want to help.

They insisted a drone had been at his door every day, but he’d been there most days. It was a load of rubbish. They just didn’t care.

And now they couldn’t guarantee delivery. A knock-on effect of the Christmas rush, apparently.

The light faded a little. His eyes were on their last legs, so to speak.

If he didn’t get his new ones soon, his vision would cease, and no matter how many replacements they delivered he wouldn’t be able to see to install them.

Will the machine learning community protest?

Following on from my recent blogs about machine learning, here’s a bit of good news.

Well, probably good news.

Researchers at Google and the Toyota Technological Institute at Chicago are trying to do something about bias in machine learning by devising a test to detect it.

The problem, of course, is that these algorithms are deliberately designed to develop themselves, becoming complex and opaque to anyone trying to understand them. The test spots bias by looking at the data going in and the decisions coming out, rather than trying to figure out how the black box of the algorithm is actually working.

This has to be applauded, so long as the people analysing and testing the decisions aren’t biased themselves; there’s an obvious danger that the very people unconsciously introducing bias into the algorithm also introduce the same bias into the test: a futuristic version of groupthink.

In a recent article in the Guardian newspaper, Alan Winfield, professor of robot ethics at the University of the West of England, said: “Imagine there’s a court case for one of these decisions. A court would have to hear from an expert witness explaining why the program made the decision it did.”

Alan, who was one of the scientists I collaborated with on Science and Science Fiction: Versions of the Future, acknowledges in the article that an absolute requirement for transparency is likely to prompt “howls of protest” from the deep learning community. “It’s too bad,” he said.

I’m not a machine learning expert, so a lot of the paper that sets out this test is beyond my understanding, but I couldn’t see how the bias that already exists in our society wouldn’t be incorporated into the test itself.

Take a look for yourself at the paper, Equality of Opportunity in Supervised Learning.


photo credit: ING Group The Next Rembrandt via photopin (license)