Could it be true that Google DeepMind has discovered that AIs are more likely to choose a course of action that tests their abilities than one that might lead to the outcome they've been programmed to achieve?
Should this worry us?
Maybe. Or maybe not.
Of course it's unnerving, and possibly dangerous, for an artificial intelligence to take the road of least boredom rather than the road towards its goals.
But, stop for a moment.
Let's take this a step further and assume it's true that in times of scarcity humans struggle to know which co-operation is positive and which is naively foolish, and so they tend towards domination. Then imagine a bunch of AIs that prefer working out when it's better to co-operate and compromise. Now, presuming we put AIs in charge, we have the possibility that the deep-down driving force of those that run the world is orientated towards mutual benefit.
Wouldn’t that be a good thing?