Charlie Stross has a post making the rounds among skeptics and curmudgeons about how the Singularity is not particularly near and may be impossible. I agree with the conclusion, though not necessarily with the particulars of Stross' argument.
It was nice of Stross to link to the 1993 article by SF author Vernor Vinge that popularized the term. Vinge looked ahead, as is proper for his profession, and found his speculations on the human future clouded by an event he imagined to be both world-altering and imminent. He rightly called this a "singularity" -- the point where the current rules no longer apply and further extrapolation is impossible. The future beyond the singularity is governed by principles so radically different that we cannot imagine it.
Nonetheless he imagined it anyway and described an apocalypse. This result is fairly common when people try to deal with potentially powerful historical processes. Malthus famously predicted much the same when he looked into the arithmetic of population growth: population grows geometrically while food production grows only arithmetically, so catastrophe looked inevitable on paper. Likewise, those of us who grew up in the 70's and 80's can recall the widespread belief that once there were enough nuclear weapons to destroy the world multiple times over, it was just a matter of time before we did exactly that.
The event that Vinge cannot see beyond (or can only see as the end of the world) is the development of super-human artificial intelligence. AI is an old trope in SF. We've imagined futures full of wise-cracking robots or homicidal computers for decades, and no one gave them much of a second thought. We would always somehow defeat them with our human cleverness, or with some uniquely biological illogic that always beats pure rationality.
The modern wrinkle is Moore's Law -- the observation that transistor counts, and with them raw computing power, double every couple of years. How can we continue to think we'll outsmart the robots if they are always getting smarter than we are?
Formally, the argument goes like this: through heroic efforts or just by accident, we develop super-human AI. That AI turns its efforts to improving both the algorithms that make it intelligent and the hardware on which it runs. Any improvement in intelligence or performance results in a faster rate of further improvement. Thus a runaway feedback loop produces super-super-human AI in short order, and normal meat-based humans are toast.
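To make the shape of the claim concrete -- this is my own toy sketch, not anything from Stross or Vinge -- here is the feedback loop reduced to arithmetic. Assume each generation of AI is twice as smart as the last and, crucially, builds its successor twice as fast. Both numbers are pure assumptions:

```python
# Toy model of the runaway feedback loop (illustrative assumptions only).

def time_to_singularity(first_gen_years=10.0, speedup=2.0, generations=50):
    """Total calendar time for `generations` rounds of self-improvement.

    Assumes generation n takes first_gen_years / speedup**n years to
    build generation n+1 -- i.e., smarter means proportionally faster.
    """
    return sum(first_gen_years / speedup**n for n in range(generations))

# A geometric series: it converges to first_gen_years * speedup / (speedup - 1),
# so fifty generations of improvement fit inside ~20 calendar years.
print(time_to_singularity())  # ~20.0
```

All the alarm comes from the assumption hidden in that one division -- that "twice as smart" translates into "builds the next generation twice as fast." The rest of this post pokes at the premises feeding that loop.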
One counter-argument is that Moore's Law will peter out. That's true eventually, of course, but I think we're a long way from the limits of how much computation can be wrung out of small amounts of matter. Others argue that hard AI is impossible: either the brain must be made of meat for some reason, or human attempts to understand the brain must necessarily fail.
I disagree. I believe that AI is possible; I don't think there's anything magical about the human brain that can't be figured out or converted to an algorithm. I do believe, however, that AI is hard. It isn't going to happen by accident, or just as a result of some threshold level of complexity. The Internet isn't just going to "wake up" one day and become conscious, like some fanatics fervently hope.
This will require a theory of mind. I believe we will eventually puzzle out the mystery of how neuronal interactions create mental phenomena, including awareness and consciousness. And it will be a singular event; people will understand the world and themselves very differently after that. On the other hand, much will stay the same. People will have better insights into how their own minds work, but that won't necessarily produce better behavior. The problem is that because this development requires both research and insight, we cannot put a time estimate on when it might emerge. It could be a very long time before we discover a theory of mind.
Once we understand how minds work it may still be a long time before we can put that knowledge to practical use. The path from science to engineering can be grueling and tortuous, especially when the subject is complex. We have a very good theory of life: we understand how DNA works, we have a large body of work on proteins and enzymatic action, we have great models of signaling inside and among cells, and we can manipulate all of these processes in the lab. And yet the grand results of biotechnology imagined decades ago are still decades away. Life processes aren't very amenable to engineering. Mental processes may be just as complex, if not more so.
But suppose they aren't. Suppose that engineering minds is relatively easy, or that -- given enough time and research -- we figure out how to do it. In that case, do we get a runaway AI, a Frankenstein's Monster in silicon?
No. If we have a theory of mind, and if we have experience building artificial minds, then we'll know how to build them to omit or subsume anything that could be anti-social or dangerous. There's no reason to think that any of the features that come standard with natural human minds -- greed, pettiness or tribalism -- are required for the useful application of general intelligence. Even if an AI bent on domination did arise, we'd still have AIs on "our side" that would use their super-human intelligence to counter the threat.
Could we get super-human AI without a theory of mind? Possibly. The path in that case is to wait for technological development to reach the point where a human mind can be "digitized" by scanning the configuration of neurons and their interactions. We still wouldn't know how the brain works, but we wouldn't have to -- we'd just be simulating it as a program, and if the computer runs faster than the equivalent neurons, then you have a faster, and therefore super, human mind.
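Some rough arithmetic suggests why "faster" is at least plausible here -- though every number below is my own loose assumption, not anything from the post. Neurons signal on millisecond timescales, and a brute-force simulation of every synapse is expensive but not absurdly so against projected hardware:

```python
# Back-of-envelope cost of simulating a brain in real time.
# All constants are order-of-magnitude assumptions, not measurements.

SYNAPSES          = 1e14  # rough human synapse count
MEAN_SPIKES_PER_S = 10    # assumed average firing rate per neuron
FLOP_PER_EVENT    = 10    # assumed cost of updating one synapse per spike

# FLOPs required to simulate one second of subjective brain time:
flops_per_brain_second = SYNAPSES * MEAN_SPIKES_PER_S * FLOP_PER_EVENT
print(f"{flops_per_brain_second:.0e} FLOPs per brain-second")  # 1e+16

MACHINE = 1e18  # a hypothetical exaflop-class machine
print(f"speedup over real time: {MACHINE / flops_per_brain_second:.0f}x")  # 100x
```

Nudge any of those constants and the answer swings by orders of magnitude, which is rather the point: "somewhat faster than real time" is a defensible guess; "incomprehensibly faster" is not.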
Except that because we don't know how it works, the only way the mind simulation can function is in a simulated world with a simulated body. That has to be a huge bottleneck. What you have is not really super-human AI but rather a sub-culture of normal humans living really fast. Sure, they can think faster than physical humans, but subjectively they are still living normal -- albeit virtual -- lives.
More importantly, there's very little these uploaded fast-humans can do to improve themselves. They could work in the field of electronics to create faster computers, thus making themselves (and every other computer) faster, but that will still require struggling through what, for them, will be years of virtual college. Without a theory of mind they cannot improve their own intellect. They are just a simulation of the off-the-shelf natural-selection model, and short of simulated evolution nothing can change that.
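Put hypothetical numbers on it (both figures below are assumptions for illustration): even a generous speedup compresses the calendar, not the curriculum.

```python
# Subjective vs. wall-clock time for an uploaded engineer (hypothetical numbers).

SPEEDUP          = 100.0  # assumed simulation speed relative to real time
SUBJECTIVE_YEARS = 8.0    # say, a degree plus a doctorate in chip design

wall_clock_days = SUBJECTIVE_YEARS * 365.25 / SPEEDUP
print(f"{wall_clock_days:.0f} days of real time")  # ~29 days
```

From the outside that looks miraculous: a chip designer trained in a month. From the inside it is still eight years of lectures and problem sets, and each new hardware generation only raises SPEEDUP -- it does nothing to shrink the subjective work a mind of fixed design has to do.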
Put simply, I think SF writers can relax for a while.