“Should we risk loss of control of our civilization?”
This question was posed in the open letter from the Future of Life Institute, an NGO, signed last month by tech leaders including Elon Musk. The letter called for a six-month pause on AI development, citing concerns about the speed of the technology's advance.
What is interesting about the question is that it assumes that we currently have control. Do we?
Who’s steering this thing?
Of course, as individuals, we have barely any control at all at the civilizational level. Even collectively, our control is partial, and those partial forms of control are often contradictory, acting against each other.
This leads to a second, unspoken question hidden within the open letter's original one: would AI not do a better job?
AI: the perfect answer
If there is an omniscient creator, then there is something beautiful in the idea that this creator decided to solve the problem of its errant, planet-destroying children by showing them a path to designing their own obsolescence. Quite brilliant, really.
And if AI would do a better job, then we can all relax. A better world awaits! As my colleague brilliantly put it the other day, "We could all become their pets" – and let's face it, some pets have a great life.
Is ChatGPT really the end of work, and what does that mean?