Is The Singularity No Threat? January 22, 2011 · Posted by Metabiological in Synthetic Intelligence, Transhumanism.
Tags: artificial intelligence, Singularity, transhumanism
Over at IEET, Kyle Munkittrick has just explained why we have nothing to fear from the rise of machine intelligence. His post is a response to a short post by Michael Anissimov in which he reiterates a position he has held for a while: that the Singularity is the greatest threat humanity is likely to face in the coming century.
Now, I’m not entirely sure where I stand on the issue of the Singularity’s threat. I certainly recognize that the development of new technologies always brings risks with it (see coal power), and there is little doubt that the emergence of greater-than-human intelligence would constitute a major one. That said, I have a hard time not laughing at some of the more apocalyptic visions I’ve seen, and I feel a strong urge to punch anyone who brings up Skynet as anything other than a joke. But Kyle’s naiveté (I can think of no other word for it) about the potential threat is nothing short of jaw-dropping.
The major problem with his argument, unfortunately, happens to be its central thesis: that even if a synthetic intelligence were to arise, it would be unable to interact with the physical world and therefore poses no threat. Even granting that scenario (personally, I think otherwise), his suggestion that an intelligence confined to a computer wouldn’t be able to affect the world is downright ludicrous. He even mentions, as an example, an SI causing havoc on our communication networks, and then brushes it off as if it were nothing. One would have hoped that a person who writes blogs on the internet for a living would have a little more respect for the way communications technology has become the bedrock of our society and economy.
In fact, let’s do a little thought experiment. Take an industry, agriculture for example, that is essential to the continued prosperity of humanity and heavily reliant on computer technology. Nowadays most food is produced far from its point of consumption. Whether this is good or bad is a subject for another time; the fact is that most of us do not subsist on food grown in our local region. Maintaining the elaborate system that gets your food from a farm thousands of miles away, across continents and oceans, takes a large and powerful infrastructure that today is heavily dependent on telecommunications technology. Now imagine that something, say an SI, were able to disrupt that system. How long do you think it would take for cities to turn into battlegrounds? It wouldn’t even need to be a large disruption; most supermarkets don’t keep stock on hand for long periods, and an event like the one I’m describing could send people into a buying panic.
Want a more recent example of computers wreaking havoc in the real world? How about last year’s flash crash in the stock market? Wall Street algorithms caused a seven-hundred-point drop in the Dow Jones in a matter of minutes. They weren’t malicious (they were just doing what they were programmed to do), and they sure as hell couldn’t interact with the physical world, yet they still managed to send people into a panic, if only for a few moments. As we give more and more control to machines, who can really say the next crash won’t be worse?
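The feedback loop behind that kind of crash can be sketched in a few lines. The snippet below is a toy model, not a description of any real trading system: the stop-loss thresholds and the fixed price impact per sale are invented for illustration. The point is only that simple, non-malicious rules can cascade, since each automated sale pushes the price down far enough to trigger the next seller.

```python
# Toy model of an algorithmic sell-off cascade. All numbers here are
# invented for illustration; this is not based on real market data.
def simulate_cascade(price, traders, impact=1.0):
    """Each trader sells once when the price falls below its stop
    threshold; every sale depresses the price by `impact`, which can
    trip the next trader's threshold in turn."""
    sold = set()
    triggered = True
    while triggered:
        triggered = False
        for i, stop in enumerate(traders):
            if i not in sold and price < stop:
                sold.add(i)
                price -= impact   # each sale pushes the price lower
                triggered = True
    return price, len(sold)

# A small dip below the first stop level sets off every trader in turn:
# starting at 99.0, the cascade sells through all five stops down to 94.0.
final_price, sales = simulate_cascade(price=99.0, traders=[100, 99, 98, 97, 96])
```

Here a one-point dip triggers the first algorithm, whose sale triggers the second, and so on until all five have sold. No single rule is unreasonable on its own; the danger is in the interaction.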
Now, I realize that what I’m saying sounds somewhat apocalyptic, and I don’t think such scenarios are necessarily likely. As I said, I’m not sure exactly where I stand on the issue of the Singularity’s threat, but one thing I do know is that a threat exists. To brush off the danger as Kyle and others are doing is akin to walking backwards towards a cliff while saying “Everything looks good from here.”