"In one scenario, we puny humans are simply pushed aside as a relic of evolution. It is a law of evolution that fitter species arise to displace unfit species; and perhaps humans will be lost in the shuffle, eventually winding up in zoos where our robotic creations come to stare at us. Perhaps that is our destiny: to give birth to superrobots that treat us as an embarrassingly primitive footnote in their evolution. Perhaps that is our role in history, to give birth to our evolutionary successors. In this view, our role is to get out of their way. Douglas Hofstadter confided to me that this might be the natural order of things, but we should treat these superintelligent robots as we do our children, because that is what they are, in some sense. If we can care for our children, he said to me, then why can’t we also care about intelligent robots, which are also our children?
Hans Moravec contemplates how we may feel being left in the dust by our robots: “… life may seem pointless if we are fated to spend it staring stupidly at our ultraintelligent progeny as they try to describe their ever more spectacular discoveries in baby talk that we can understand.” When we finally hit the fateful day when robots are smarter than we are, not only will we no longer be the most intelligent beings on earth, but our creations may make copies of themselves that are even smarter than they are. This army of self-replicating robots will then create endless future generations of robots, each one smarter than the last. Since robots can theoretically produce ever-smarter generations in a very short period of time, this process will eventually explode exponentially, until they begin to devour the resources of the planet in their insatiable quest to become ever more intelligent.
In one scenario, this ravenous appetite for ever-increasing intelligence will eventually ravage the resources of the entire planet, so that the earth itself becomes a computer. Some envision these superintelligent robots then shooting out into space to continue their quest, converting other planets, stars, and galaxies into computers. But since the planets, stars, and galaxies are so incredibly far away, perhaps the computer would have to alter the laws of physics so that its ravenous expansion could outrace the speed of light and consume whole star systems and galaxies. Some even believe it might consume the entire universe, so that the universe itself becomes intelligent. This is the “singularity.” The word originally came from the world of relativistic physics, my personal specialty, where a singularity represents a point of infinite gravity, from which nothing can escape, such as a black hole. Because light itself cannot escape, a black hole is surrounded by a horizon beyond which we cannot see. The idea of an AI singularity was first mentioned in 1958, when the mathematician Stanislaw Ulam (who made the key breakthrough in the design of the hydrogen bomb) recalled a conversation with John von Neumann: “One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the human race beyond which human affairs, as we know them, could not continue.” Versions of the idea have been kicking around for decades. But it was amplified and popularized by the science fiction writer and mathematician Vernor Vinge in his novels and essays.
But this leaves the crucial question unanswered: When will the singularity take place? Within our lifetimes? Perhaps in the next century? Or never? We recall that the participants at the 2009 Asilomar conference put the date anywhere between 20 and 1,000 years into the future. One man who has become the spokesperson for the singularity is the inventor and bestselling author Ray Kurzweil, who has a penchant for making predictions based on the exponential growth of technology. Kurzweil once told me that when we gaze at the distant stars at night, we should perhaps be able to see some cosmic evidence of the singularity happening in a distant galaxy. With the ability to devour or rearrange whole star systems, a rapidly expanding singularity should leave some footprint behind. (His detractors say that he is whipping up a near-religious fervor around the singularity. His supporters say that, judging by his track record, he has an uncanny ability to see into the future.)
Kurzweil cut his teeth on the computer revolution by starting companies in diverse fields involving pattern recognition, such as speech recognition, optical character recognition, and electronic keyboard instruments. In 1999, he wrote a best seller, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which predicted when robots would surpass us in intelligence. In 2005, he elaborated on those predictions in The Singularity Is Near. In his view, the fateful day when computers surpass human intelligence will come in stages.
By 2019, he predicts, a $1,000 personal computer will have as much raw power as a human brain. Soon afterward, computers will leave us in the dust. By 2029, a $1,000 personal computer will be 1,000 times more powerful than the human brain. By 2045, a $1,000 computer will be a billion times more intelligent than all humans combined. Even small computers will surpass the ability of the entire human race.
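Kurzweil's timeline is an exercise in compounding: a thousandfold gain between 2019 and 2029 implies that computer power roughly doubles every year, since 2^10 = 1,024 ≈ 1,000. A minimal sketch of this arithmetic (the one-year doubling period is my illustrative assumption, not Kurzweil's exact figure):

```python
# Illustrative only: project compute growth under an assumed fixed doubling period.
def projected_multiple(start_year: int, end_year: int, doubling_years: float) -> float:
    """Return the growth multiple between two years at a given doubling rate."""
    return 2 ** ((end_year - start_year) / doubling_years)

# A ~1-year doubling period reproduces the roughly thousandfold jump
# predicted between 2019 and 2029: 2**10 = 1024.
print(projected_multiple(2019, 2029, doubling_years=1.0))  # 1024.0
```

The same compounding, continued past 2045, is what turns steady doubling into the "runaway" growth described below.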
After 2045, computers become so advanced that they make copies of themselves that are ever increasing in intelligence, creating a runaway singularity. To satisfy their never-ending, ravenous appetite for computer power, they will begin to devour the earth, asteroids, planets, and stars, and even affect the cosmological history of the universe itself.
I had the chance to visit Kurzweil in his office outside Boston. Walking through the corridor, I saw the awards and honors he has received, as well as some of the musical instruments he has designed, which are used by top musicians such as Stevie Wonder. He explained to me that there was a turning point in his life: at thirty-five, he was unexpectedly diagnosed with type II diabetes. Suddenly, he was faced with the grim reality that he would not live long enough to see his predictions come true. His body, after years of neglect, had aged beyond his years. Rattled by this diagnosis, he attacked the problem of personal health with the same enthusiasm and energy he had brought to the computer revolution. (Today, he consumes more than 100 pills a day and has written books on the revolution in longevity. He expects that a revolution in microscopic robots will make it possible to clean out and repair the human body so that it can live forever. His philosophy is that he would like to live long enough to see the medical breakthroughs that can prolong our life spans indefinitely. In other words, he wants to live long enough to live forever.)
Recently, he embarked on an ambitious plan to launch Singularity University, based at the NASA Ames Research Center in the Bay Area, which trains a cadre of scientists to prepare for the coming singularity. There are many variations and combinations of these themes. Kurzweil himself believes, “It’s not going to be an invasion of intelligent machines coming over the horizon. We’re going to merge with this technology …. We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.”
And it's already started.
Meet IBM's Watson: http://www-03.ibm.com/innovation/us/watson/what-is-watson/the-future-of-watson.html