With AI we act like we're here:
Staring down a choice between robot armageddon and techno-utopia.
Pictures generated with ChatGPT ;)
When actually we're here:
...stumbling through a fog into an uncertain future. AI will be consequential - but we don't know how.
We should be humble in our ignorance. 50 years ago people worried about communism, overpopulation, and nuclear war. They imagined that by 2024 we'd have flying cars, teleportation, and moon bases. They didn't imagine the internet, iPhones, or Uber. In short, they were wrong. And so are we. As Niels Bohr said, "Prediction is very difficult, especially about the future."
With AI we're apocalypse bike-shedding: we focus VERY hard on a few easy-to-imagine scenarios in which AI acts just like us. It gets smart, self-interested, possibly self-aware, and does to us what we did to the mammoth and the dodo.
We're not spending nearly enough time imagining alternate scenarios. Which means there's a good chance we're falling prey to availability bias.
But isn't it smart to focus on downside risk? If there's a small chance we destroy humanity, we should avoid that at all costs, right? Wrong. We can always invent catastrophic scenarios. We have an instinctive fear of the unknown, so new technologies will always cause unease. And we are always under threat from an endless number of small-probability existential threats, both known (pandemics, super-volcanoes, nuclear annihilation, quasars) and unknown (...). We need to counterbalance our fear response with a rational assessment of a threat's expected chance and impact. We should start by asking whether there is a reasonable, known mechanism for the risk to be realized.
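To make "expected chance and impact" slightly more concrete, here is a minimal sketch in Python. The threats listed and every probability and impact figure are placeholders I've invented purely to show the arithmetic (expected cost = probability × impact); none of them are real estimates.

```python
# Illustrative sketch only: every number below is a made-up placeholder,
# not a real risk estimate. The point is the arithmetic:
#     expected cost = probability * impact

threats = {
    # name: (annual probability, impact in arbitrary "badness" units)
    "pandemic":      (1e-2, 1e4),
    "super-volcano": (1e-5, 1e6),
    "rogue ASI":     (1e-7, 1e9),  # huge impact - but is there a known mechanism?
}

for name, (probability, impact) in threats.items():
    expected_cost = probability * impact
    print(f"{name:15} expected cost per year ~ {expected_cost:,.0f}")
```

The calculation is only as good as the probability you feed it - which is why the question of a reasonable, known mechanism matters. Without one, the probability is pure guesswork.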
With AI, this fear is unfounded. Leaps to catastrophe inevitably rest on magical thinking. We fill the gaps in our understanding with very specific imagined tragedies, and use the resulting fear to stifle progress. It's like suggesting a moratorium on cancer research after watching the film I Am Legend.
This skewed focus crowds out efforts to understand what current AI technology actually is. And much more importantly - what we are: What is intelligence? What is consciousness? What makes us special? We have almost no idea. And we are afraid computers are going to stumble into discovering this before we do - and use it to replace us.
Consider two popular AI doomsday myths - I'll call them Terminator and Paperclip Maximizer.
Terminator
In this apocalypse, we birth another species that fights us for survival. It's self-aware, much smarter than us, and able to improve rapidly. It easily takes control of our digital systems and turns them against us.
Clearly we don't stand a chance against a super-intelligent, sentient enemy - or do we? There are reasons for hope that we leave out of the story:
Maybe this AI won't care to survive. A survival instinct has been bred into us over 4 billion years of evolution. Why should we expect a program that was not selected for survival to care at all whether it continues?
Maybe it won't want to fight us. Species do often come into conflict, but most live side by side without a need or desire to destroy each other. Importantly, many develop symbiotic relationships. Perhaps a sentient AI will mostly look to master the smallest scales - efficiently using matter to craft computing resources. That may not materially conflict with our human-scale world.
Maybe it won't think of itself as a species. Our sense of self, like our survival instinct, is likely a product of evolution. How would a program - with no clear body - draw a boundary between itself and the rest of the world? Would it? Perhaps it would think of itself as an extension of humanity. It may not cast us as outsiders.
Maybe it will find consciousness hellish and unbearable. Anyone who has been in great pain or suffered depression, severe anxiety, or, more pointedly, hallucinations or "bad trips" will know that consciousness doesn't necessarily bring clarity and calm. It can be incredibly difficult to endure unless correctly calibrated.
Maybe AI will be lonely. Yuval Noah Harari says the defining characteristic of humanity is our ability to cooperate in large numbers. Our intelligence is a social intelligence. Think of how difficult it is for a single human to create a toaster. A super-intelligent AI may suffer from singularity of thought. And if it creates copies of itself, there's no guarantee they will have the diversity of thought and skill that biological evolution produces. Imagine waking up as the only human in an alien world.
I'm reminded of the scene in the Disney film Aladdin where Jafar wishes to become a genie, neglecting the constraints that come with it. He gains "phenomenal cosmic power" but is trapped inside a lamp. Likewise, we imagine that AI will have all of the benefits of intelligence and awareness with none of the costs - most notably, the ability to suffer.
Paperclips
In this apocalypse, humans create a self-learning, AI-powered machine to produce paperclips. It improves until it becomes super-intelligent. Knowing humans will get in the way of its paperclipping, it hatches a plan. It connects to the internet, commandeers machines, and creates an army of nanobots that stealthily kill us all. It goes on for eternity, turning the universe into so many paperclips.
But now consider the following:
The machine doesn't care if it's turned off. The YouTube algorithm may be very good at getting us to click, but when I finally decide to close my browser, it doesn't protest. Why would this machine be any different?
It gets very good at producing paperclips, but that's about it. Perhaps artificial narrow intelligence (ANI - AI that is really good at one thing) is categorically different from artificial general intelligence (AGI - AI that, like humans, can solve a broad set of novel problems). Developing superior ANI may not inevitably lead to AGI. In that case, the machine gets really good at efficiently processing and bending raw material into paperclips, but it won't solve the many other problems it would take to commandeer human beings, their institutions, and their machines to create its army of human-hungry nanobots.
The machine achieves artificial super-intelligence (ASI - AGI, but much smarter than human-level). However, humanity is too complex a system to beat. With all our knowledge and computing power, we still can't predict the weather - at least not very well. The world is a chaotic system, and it's beyond the computational power of even a supercomputer to game out and understand every contingency. Consider this analogy: humans are much smarter than insects, but it would be incredibly hard to destroy them all - nor do we want to. Some are useful. We are content to wall ourselves off. ASI might feel similarly towards us.
The machine achieves ASI, but is limited in its influence over the physical world. It can think better than any human, but it's trapped in a paperclip-producing body. And creating things in the physical world proves extremely complex. It remains entirely reliant on humans for any impact beyond informational channels.
The machine achieves ASI, but can't meaningfully reproduce or repair itself, and degrades with age. This is a special case of the point above: unlike living organisms, ASI may not easily grow new versions of itself, or heal. Without human-led maintenance, its hardware degrades. Sure, it can copy its software, but without robust, reproducible hardware it has little impact on the real world.
These alternate scenarios are not necessarily likely - but neither are the classic doomsday stories. And that's the point. It seems incredibly unlikely that we will stumble into ASI by chance, without knowing how intelligence works (a topic for another post). A pile of microchips will not spontaneously assemble into a super-intelligence. And even if an AI does achieve ASI, it will still face the harsh reality of operating in a physical world where it has very little practice - and of contending with (many) living species that have evolved over billions of years to be uniquely suited for life on Earth.