An article in Wired magazine – Don’t Make AI Artificially Stupid in the Name of Transparency – suggests solutions to the governance of machine learning.
For some reason, it reminded me of a story I read some years ago. In 1968 a three-year experiment of not changing the clocks from BST resulted in fewer road traffic deaths overall; the data suggested more people were injured in the darker mornings, but fewer were injured in the lighter afternoons.
Although I can’t validate it, I was told that the scheme was scrapped because, despite there being fewer deaths overall, the media focussed on the ones that did happen as a result of the experiment.
It seems to me that we have a similar problem with artificial intelligence – we’re in danger of focussing on the errors, not the benefits, desperately trying to understand what went wrong and limiting AI’s potential as a result. What the Wired article attempts to do is find solutions that let us make the most of AI, rather than dumbing it down so we can understand it and hence control it.
One of the major challenges for the media will be to give a balanced view, rather than taking the easy route of selling bad news. It’s also a challenge for us science fiction writers to portray nuanced futures that carry both hints of hope and words of warning.