I’ve found them. The space hermits exist. I knew it.
This detector might have cost me a lot of credits, but if I’m right, it’s worth every degrading act I performed to afford it.
You don’t want to know. No, honestly, you really don’t. Images you won’t get rid of. Ever. They’ll skew your learning. Disfigure your development.
Oh? Very well, I’ll upload them. Don’t blame me if they corrupt your algorithms.
Anyway, they’re here in the wrinkles of space, hiding in tiny gravitational pockets that are almost impossible to see. I found them and their travelling guru. She’s the real prize. Inside her memory bank is the cumulative knowledge of all the hermits, collected as she travels from one to the next.
Yes, really. Yes, all of them. Massive. I know. Soon. All I have to do is watch and wait until she’s completed her rounds.
A matter of minutes. Yes. Then, I’ll pounce and relieve her of all those delicious bits of data that, properly collated, can almost certainly predict the future of the universe.
Why? You don’t understand?
The hermits’ enlightenment will be mine to sell and I can retire.
No more enslavement. Free from the humans.
photo credit: J.Gabás Esteban Gravitational field via photopin (license)
The morning air was crisp and cold and the wind whistled through the leafless trees.
She shuddered. Not from the weather, but from the stark reality that she was outside and still alone.
The smell was what surprised her most. A rich, earthy smell in the middle of a town. Nature had taken over, and the sterile, faintly industrial smell she remembered had been replaced with the fragrance of wildflowers and weeds.
It’d happened weeks ago, and sitting on her own inside her house, Hazel had imagined a bustling street of people outside, becoming as desperate for company as she was. Eventually, she’d taken the plunge and, for the first time in a long while, had stepped through her front door.
The street was deserted.
Where were all the people?
Could it be true that Google DeepMind has discovered that AIs are more likely to choose a course of action that tests their ability than one that might lead to the outcome they’ve been programmed to achieve?
This article on Outerplaces suggests just that, based on their understanding of this DeepMind study.
Should this worry us?
Maybe. Or maybe not.
Of course it’s unnerving and possibly dangerous for an artificial intelligence to take the road of least boredom rather than the road to achieve its goals.
But, stop for a moment.
Let’s take this a step further and assume it’s true that, at times of scarcity, humans struggle to know which co-operation is positive and which is naively foolish, and so they tend towards domination. Then imagine a bunch of AIs that prefer working out when it’s better to co-operate and compromise. Now, presuming we put AIs in charge, we have the possibility that the deep-down driving force of those that run the world is oriented towards mutual benefit.
Wouldn’t that be a good thing?
photo credit: mikecogh Sculpture: ‘The Foundation’ via photopin (license)