MORE SINGULARITY TALK….I’ve been reading political books lately, but before the weekend is over I think I’ll backtrack and have a bit more fun with Ray Kurzweil’s The Singularity Is Near. As you may recall, Kurzweil’s basic thesis is that within a few decades (a) we will develop superintelligent machines, and (b) this will produce an inflection point in human development, with the result that massive intelligence will shortly thereafter spread throughout the entire universe (“shortly” compared to the age of life on earth, that is).
Is this really what will happen? Maybe. If I can offer a crude analogy, however, this seems sort of like predicting what will happen if you throw a baseball toward a box of ping-pong balls a few hundred feet away. Even if the wind is swirling, you can make a pretty reasonable prediction that the ball is going to hit the box, but it’s virtually impossible to tell what happens after that. Maybe they aren’t ping-pong balls after all, but hard-boiled eggs. Or blobs of plastic explosive. And even if they are ping-pong balls, we still don’t know enough about them to predict how they’ll scatter once they’re hit.
That’s how I feel about the Singularity. I think the evidence is pretty strong that intelligent machines are in our future, and Kurzweil marshals that evidence pretty well. But is he right about what takes place after that? Do we really know what’s going to happen to the box of ping-pong balls? This is obviously the domain of dorm-room bull sessions, but with that caveat in mind, there’s another possibility for our future development, one based on something Kurzweil himself suggests in his book.
In a section discussing how the brain works, he casts some doubt on whether human beings even exercise free will in the first place:
Interestingly, we are able to predict or anticipate our own decisions. Benjamin Libet at the University of California at Davis shows that neural activity to initiate an action actually occurs about a third of a second before the brain has made the decision to take the action. The implication, according to Libet, is that the decision is really an illusion, that “consciousness is out of the loop.”
….A related experiment was conducted recently in which neurophysiologists electronically stimulated points in the brain to induce particular emotional feelings. The subjects immediately came up with a rationale for experiencing those emotions. It has been known for years that in patients whose left and right brains are no longer connected, one side of the brain (usually the more verbal left side) will create elaborate explanations (“confabulations”) for actions initiated by the other side, as if the left side were the public-relations agent for the right side.
With this in mind, here’s another possibility for what happens after we create fantastically advanced computing capabilities that are thoroughly merged with human consciousness: we discover, in a way that’s truly convincing, that free will doesn’t exist. And so we give up. Within a few decades, the human race chooses to put itself out of existence, because there’s really no point to its continued survival and because our biological urge toward self-preservation, honed over millennia by evolution, no longer controls our merged biological/machine selves.
(Needless to say, we don’t really have the right language to write about this. Since we’re positing a lack of free will in the first place, “chooses” in the sentence above should be read in the same way that “flowers choose to grow toward the sun,” or something like that.)
This would certainly explain why there don’t seem to be any other intelligent races in the universe. It’s not that we’re the first (Kurzweil’s guess); it’s that any species that evolves far enough to produce machine computation in the first place quickly produces computation advanced enough to reveal that free will is a sham. And that’s that.
Or maybe not. Who knows? But in any case, I suspect it’s wrong to assume that our future superintelligent selves will have the same motivations we do, any more than a superintelligent monkey would have the same motivations as an ordinary monkey. At least, that’s not how it turned out with monkeys, is it?