A supplement to my recent article about the End of the World:
I was reading today about a potentially apocalyptic event in our future referred to as the Technological Singularity, and I've come to the conclusion that if the Mayan deadline passes uneventfully, this may be the most likely way humanity will eventually wipe itself out.
It’s no accident that the term “singularity” appears in other branches of science as well. The best-known is the spacetime singularity, a point at which gravity approaches infinity, causing a breakdown of the laws of modern physics (the study of black holes mentions this a lot). Here’s the CliffsNotes version: when we fall, our speed increases steadily, by roughly 9.8 meters per second for every second we fall (ignoring air resistance). If it were possible to fall from an infinite height, our speed would grow without bound given enough time.
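To put toy numbers on that falling-body analogy (a minimal sketch assuming constant gravitational acceleration and no air resistance, so speed grows linearly and without bound):

```python
# Toy illustration of the falling-body analogy: under constant
# gravity with no air resistance, speed grows linearly with time,
# so a fall of unbounded duration means unbounded speed.
G = 9.8  # gravitational acceleration near Earth's surface, m/s^2

def fall_speed(seconds):
    """Speed after falling from rest for the given number of seconds."""
    return G * seconds

for t in (1, 10, 100, 1000):
    print(f"after {t:>4} s: {fall_speed(t):>8.1f} m/s")
```

There is no ceiling in this model: double the fall time and you double the speed, forever, which is the sense in which the quantity “approaches infinity.”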
In a similar vein, a “technological singularity” refers to a point in time when all of our knowledge and innovations occur at such speed that the potential (and consequences) become “infinitely unpredictable.” Specifically, we’re looking at the moment when man builds an ultra-intelligent machine that can surpass the intellect of its makers, at which point things spin out of control.
When statistician I. J. Good first hypothesized this in 1965, he declared that such an ultra-intelligent machine would be “the last invention that man need ever make,” because at that point we will have rendered ourselves obsolete.
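Good’s argument is essentially a feedback loop: a machine smart enough to design a better machine kicks off a compounding chain of successors. Here’s a toy sketch of that loop (the starting level, the 10% per-generation gain, and the generation count are all made-up illustrative numbers, not anything Good proposed):

```python
# Toy sketch of Good's "intelligence explosion" feedback loop:
# each generation of machine designs a successor slightly smarter
# than itself, so capability compounds geometrically.
def intelligence_explosion(start=1.0, improvement=1.1, generations=50):
    """Yield (generation, capability) for each successive machine."""
    capability = start
    for gen in range(generations):
        yield gen, capability
        # the machine redesigns itself; assumed 10% gain per generation
        capability *= improvement

levels = dict(intelligence_explosion())
print(levels[0], levels[49])  # generation 0 vs. ~106x the starting level
```

The unsettling part isn’t the arithmetic, which is just compound growth; it’s that once the loop closes, humans are no longer a term in the equation.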
As you can imagine, this is the kind of theory that makes for some incredibly dramatic science fiction. In the Terminator movies, the Singularity is reached on July 25th, 2004, and is promptly followed by a massive nuclear launch that nearly wipes out the entire human race. In the Matrix trilogy, the robots subjugate humanity at the turn of the 22nd century. Other writers have even worked out methods of prevention: in William Gibson’s Neuromancer, artificial intelligences are regulated by “Turing Police” to make sure they never become smarter than us. And leaping even further beyond that, in Dan Simmons’ Hyperion, a group of artificial intelligences debate whether to design a new technology that will render themselves obsolete, suggesting that even the AIs may have to face their own subsequent singularity.
That it’s possible to build this ultra-intelligent game-ender seems obvious to me, which is why this theory is so troubling. We’re already building machines that are better than us at specific tasks, and it’s only a matter of time before we build a machine that is better than us at everything. (For example, although the chess computer Deep Blue plays the game only slightly better than the best players in the world, it totally trounces its own programmers. So it is definitely possible to build an entity that exhibits more intelligence than its creators; the key difference is that, at the moment, this is only possible in very narrow domains like chess.)
Precisely when this world-changing event will occur is, of course, unknown, although the leading voices on this topic have published some opinions on the matter. The rather dramatic first paragraph of mathematician Vernor Vinge’s 1993 essay reads:
Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.
If this is the kind of analysis that intrigues you, check out the Singularity Institute for Artificial Intelligence, and this riveting article on friendly (and un-friendly) AI. 15 years to go, folks :)