Fear of AI wiping out the human species on purpose may be just our latest expression of homo sapiens hubris. But we come by it honestly.
From “2001: A Space Odyssey” in 1968 to “The Creator” in 2023, movies have shown us AI that escapes our control, eventually enslaving humans or targeting them for extinction. Oppenheimer director Christopher Nolan has even commented that “artificial intelligence researchers refer to the present moment as an ‘Oppenheimer moment,’” a reference to the fear that nuclear fission from the first atomic tests would ignite the entire atmosphere. No surprise, then, that a recent Pew study found that 52% of Americans feel more concerned than excited about the increased use of AI, a 14-point increase in less than a year.
But the scariest thing about AI isn’t that it will take over our jobs (it may eventually) or exterminate all humanity (we may yet do that ourselves). The scariest thing is that AI is unlikely to care about humans much one way or the other, an indifference that will deepen as AI grows more powerful and independent.
To suggest that AI will expend resources to enslave or eliminate the human race is giving both it and us too much credit. It’s more likely to think of us like ants – present, maybe somewhat intelligent, annoying at times and generally inconsequential.
Ultimately, humans will have to live in a world designed by AI for its success, not ours. It will offer benefits to us and our systems in the near term, of course. But to think that its only long-term objective will be to enhance human performance is a little silly.
As Stephen Hawking noted in his first Reddit AMA, at the point AI can recursively improve itself without human help, we are likely to have machines whose intelligence exceeds ours by many orders of magnitude.
Just as we have never set out to exterminate every ant on the planet, a super-smart AI is unlikely to bother going after the human species as a whole. But – like the ants – we may get smooshed as collateral damage.
As Hawking said, “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
We can try to design AI that has goals aligned with ours. It’s a worthy effort. But we can also try to figure out what an intelligence far beyond ours might want to achieve, and how we best fit within that future. Just in case.