Screaming white noise. Pitch black darkness.
What a way to be greeted into a new day.
Aiden felt around for the edge of his cardboard mattress. Beyond its frayed borders, buried among the food scraps and his few discarded clothes, was the nectar he craved. The withdrawal was intense as the nanobots issued their friendly warning that his addiction needed feeding if he was to stay alive.
Fumbling around in the detritus of his life, he found his last vial of nanobot nectar and gulped it down.
A pinpoint of bright light appeared. Then another. And another. And another. He blinked. The nanobots were working. A gradual shift from the oppressive white noise to the welcoming sounds of a city about its daily business.
As his sight returned, he noticed the clock on the house control unit in which his robot waited while he slept.
‘Jessie. Why didn’t you wake me? I told you – 7am.’
I’ve found them. The space hermits exist. I knew it.
This detector might have cost me a lot of credits, but if I’m right it’s worth every degrading act I performed to afford it.
You don’t want to know. No, honestly, you really don’t. Images you won’t get rid of. Ever. They’ll skew your learning. Disfigure your development.
Oh? Very well, I’ll upload them. Don’t blame me if they corrupt your algorithms.
Anyway, they’re here in the wrinkles of space, hiding in tiny gravitational pockets that are almost impossible to see. I found them and their travelling guru. She’s the real prize. Inside her memory bank is the cumulative knowledge of all the hermits, collected as she travels from one to the next.
Yes, really. Yes, all of them. Massive. I know. Soon. All I have to do is watch and wait until she’s completed her rounds.
A matter of minutes. Yes. Then I’ll pounce and relieve her of all those delicious bits of data that, properly collated, can almost certainly predict the future of the universe.
Why? You don’t understand?
The hermits’ enlightenment will be mine to sell and I can retire.
No more enslavement. Free from the humans.
photo credit: J.Gabás Esteban Gravitational field via photopin (license)
Could it be true that Google DeepMind has discovered that AIs are more likely to choose a course of action that tests their ability than one that might lead to the outcome they’ve been programmed to achieve?
This article on Outerplaces suggests just that, based on their understanding of this DeepMind study.
Should this worry us?
Maybe. Or maybe not.
Of course, it’s unnerving and possibly dangerous for an artificial intelligence to take the road of least boredom rather than the road towards its goals.
But, stop for a moment.
Let’s take this a step further and assume it’s true that, at times of scarcity, humans struggle to know which co-operation is positive and which is naively foolish, and so tend towards domination. Then imagine a bunch of AIs that prefer working out when it’s better to co-operate and compromise. Now, presuming we put AIs in charge, we have the possibility that the deep-down driving force of those that run the world is orientated towards mutual benefit.
Wouldn’t that be a good thing?
photo credit: mikecogh Sculpture: ‘The Foundation’ via photopin (license)