Sadly, I was unable to attend the recent Bristolcon – I was stranded partway there because of the flooding. I was due to be on a panel discussing how to make AI more socialist. Having already put some thought into this, I decided to write a brief blog post about the ideas I would have discussed.
I’m going to resist getting into definitions of socialism or AI – both of which can be understood in many different ways – and instead approach this from a more generalist viewpoint.
We know that one of the main issues with AI that needs to be addressed is data – whether that’s the data models are trained on or the data they use to generate their responses. So, one way of making AI more equal would be to organise data trusts: organisations that hold and trade your data on your behalf. These could be geographically based or values-based. The main point is that you decide who benefits from your data and how. You can choose to earn money for yourself or your community and, most importantly, have someone you trust manage it on your behalf.
One way of ensuring truly representative data could be to create a global validated dataset that all AI must be trained on and tested against before being deployed. This is an idea I’ve heard from others (not my original idea), but it seems sensible. I realise it could stifle rapid innovation, but maybe that’s the point – to stop companies taking short cuts and releasing dangerous technology.
Another way of slowing things down would be regulation, especially in an open-source environment. That way you limit AI’s use for negative (or deadly) activity, while enabling a greater number of diverse development teams. This also has the advantage of stopping the established players from metaphorically pulling up the drawbridge and preventing the competition from getting a foothold. One example of regulation that I’ve heard is to make it illegal to create AI that is able to rewrite itself to change its core purpose.
As for oversight, you could form something akin to citizens’ assemblies to oversee the data, development and deployment of AI. You could also legislate or regulate for a managed transition to the wider adoption of AI, especially where jobs are at risk.
Finally, how about recognising that not all data is of the same value? Putting a higher premium on the data of ‘everyday people’, who make up the vast majority of the population – or, even better, recognising that marginalised communities are poorly represented in existing datasets – would mean their data was worth more than others’. Paying more for that data in order to redress the balance and the bias would be an ‘equalising’ act.
These are some of the thoughts that I was really hoping to explore further with the panel and the audience. Maybe I’ll get the chance in another forum.