AI and the Singularity: are we playing with fire?

Recently I finished delivering a lecture to a group of managers about “AI and Computing at the Bleeding Edge”, which included a discussion of the technological singularity. An attendee then approached me and asked whether I thought Artificial Intelligence was going to take over the world soon and do humanity grievous harm. She was quite serious, too – she meant it as a practical question, applicable to making life choices, not a philosophical one.

I was a bit taken aback. Admittedly, I had just explained the concept that beyond the singularity anything can happen, but I also pointed out that we humans had mastered fire and survived, so we should retain our optimism.

But the meme of AI destroying humankind is in fact making the rounds lately, reinforced by some high-profile prophets of doom. Not that it’s an entirely new idea: from before the Golem of Prague in the 16th century, to Mary Shelley’s Frankenstein (1818), to a fruitful genre of Science Fiction, people have been fascinated with tales of all sorts of rogue artificially intelligent creatures. Of course, the arrival of computers on the scene made the proposition much more plausible, retiring the supernatural as the explanation for the ungodly creations.

20th century Science Fiction writers envisioned computers and robots capable of actual intelligence (remember Asimov’s “positronic brains”?). But they missed the point. Their thinking ran along the lines of “computers are getting better and better, so one day we’ll have computers capable of independent intelligent behavior, and one of them will do very [Good|Bad] things”. But having a powerful computer at a given point in time is not the important factor. We could handle that (and, in the stories, we usually do by the last chapter). The risk of AI (and its potential, if you’re an optimist) comes from recursive self-improvement: the ability of a powerful AI system to create a smarter system, which would then do the same thing, and so on at an exponentially accelerating pace. This would extend Moore’s law, which already predicts computing power doubling every two years or so. In fact, no matter how weak today’s computer is, give it twenty years and you’ll have one a thousand times more powerful; or give it forty years for a million-fold improvement; or sixty for a billion; or…
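
For the curious, here is a minimal back-of-the-envelope sketch in Python of that doubling arithmetic; the two-year doubling period is the article’s rough assumption, not a precise constant.

```python
# Back-of-the-envelope check of the doubling arithmetic above.
# The two-year doubling period is a rough assumption, not a law of nature.
DOUBLING_PERIOD_YEARS = 2

def relative_power(years: float) -> float:
    """Computing power after `years`, relative to today's level (today = 1)."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (20, 40, 60):
    print(f"after {years} years: ~{relative_power(years):,.0f}x today's power")

# after 20 years: ~1,024x today's power
# after 40 years: ~1,048,576x today's power
# after 60 years: ~1,073,741,824x today's power
```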

And those two-year increments assume human designers; but we are already using computers to help design the next generation of processors, and in less than 20 years, at this rate, computers may be able to do the full design on their own – and they won’t need sleep or vacation time. That means each computer could design one far more powerful than itself in days, or hours, and that computer would design the next one even faster; before you know it we’d have the runaway rise of a super-intelligence vastly beyond ours… and it won’t be biological. Basically, once the singularity comes, it will all happen so fast that we humans won’t know what hit us.
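
To make that feedback loop tangible, here is a toy Python sketch under purely illustrative assumptions (capability doubling each generation, design time halving); the specific numbers are invented, only the shape of the curve matters.

```python
# A toy illustration (under assumed numbers, not a forecast) of the runaway
# loop described above: each generation designs a successor twice as capable,
# and each smarter designer finishes its successor in half the time.
capability = 1.0           # today's level, in arbitrary units
design_time_years = 2.0    # assumed length of the first, human-paced design cycle
elapsed_years = 0.0

for generation in range(1, 16):
    elapsed_years += design_time_years
    capability *= 2            # assumed capability gain per generation
    design_time_years /= 2     # assumed speed-up of the next design cycle
    print(f"generation {generation:2d}: year {elapsed_years:7.4f}, "
          f"capability {capability:,.0f}x")

# Elapsed time converges toward 4 years (a geometric series: 2 + 1 + 0.5 + ...)
# while capability keeps doubling -- the qualitative point of the paragraph.
```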

One of the earliest to articulate the idea, and the first to call it a Singularity, was Science Fiction writer Vernor Vinge, who wrote in 1993: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” It is this end that my lecture attendee was worried about.

It is certainly true that we can’t imagine what will happen past such a singularity, should it come to pass. We can’t even imagine what it will be like to share a planet with entities with an IQ millions of times higher than ours. But the possibilities are unsettling, and many folks are thinking about them, some optimistically, others with growing alarm.

Among the high-profile alarmists Elon Musk stands out, being smart, famous and articulate. Musk believes that AI poses an imminent threat, and that research at the leading edge of this field should be strictly regulated. Until it is, he has co-founded OpenAI, a non-profit AI research company that seeks to map a path to safe, ethical artificial intelligence. He is not alone in this view; Stephen Hawking has said that an artificial superintelligence could easily destroy humankind, and has likewise called for controls on the technology. On the other side there are people like Ray Kurzweil who believe the singularity heralds a golden age for mankind, in which humans and machines will merge or coexist and all our problems will be solved by benevolent computers.

Whichever side your thinking tends toward, you’d do well to keep firmly in mind how little we know, or even can know, about what the rise of a super-intelligent AI would look like. So, if you envision doomsday scenarios – whether to believe in them or to debunk them – you should realize that an AI’s attack on humanity need not involve armed robots marching in the streets. It might use anything from engineering lethal plagues, to destabilizing economies, to intentionally accelerating global warming, to other means you and I can’t even imagine with our human-level thinking. Nor can we predict why it might want to harm us – it may be because we stand in the way of executing its goals, rather than out of malice or criticism of our evil ways; it may even be a side effect of something else we can’t imagine. If you believe we could simply pull the plug on it, try to imagine pulling the plug on even the present-day Internet; the AI will not be sitting on a lone computer box that you can unplug. In fact, the first thing it might do is secure its continued existence. And if you think an intelligent computer will necessarily be benevolent and moral, remember that even humans often aren’t – and a computer’s totally alien mind might grasp morality no better than any non-human animal does. Rather than wait for aliens to land here, we may be creating an alien super-being that may or may not be our undoing. We don’t know.

The only advice I can dispense, then, is to cheer up (as a matter of principle), but to take the singularity seriously enough to read up on the prevailing opinions on both sides of the debate. Some very intelligent humans are giving it their attention, and you should know what they’re saying. This is one subject worth knowing about!

Nathan Zeldes

Nathan Zeldes is a globally recognized thought leader in the search for improved knowledge worker productivity. After a 26-year career as a manager and principal engineer at Intel Corporation, he now helps organizations solve core problems at the intersection of information technology and human behavior.
