Thursday, August 7, 2008

Why the Fear of Rogue or Dangerous Artificial Intelligence is Fundamentally Irrational

We humans are driven primarily by irrational control mechanisms: the id, which wants more than its fair share, and the superego, which imposes often unrealistic expectations on ourselves and others. The weakest part of the human psyche is the rational mind, the ego.

The main thing about advanced AI that scares the willies out of a great many people is the assumption that simply achieving an ostensibly human level of intelligence will automatically produce a structure exactly like our evolution-derived, biological brains. Really, that's what we're talking about here; that's why we're afraid. Put another way, many people make the intuitive (yet deeply flawed) assumption that human intelligence is the only kind of advanced intelligence. It is a near-automatic presumption that receives very little subsequent examination.

This was the real genius of Freud: by careful examination of a very large number of patients and others, he was able to discern this tripartite nature correctly, in particular the unconscious nature of the main drivers of human behavior. It is worth pointing out that the tripartite nature of the human mind has been recognized for millennia; Plato described it in a very similar way, and Marx did as well. The difference is that Plato and Marx described it in an idealistic, hopeful way, not an empirically based (i.e., scientific) way. Freud was the first to approach the macro-structure of the human mind scientifically, though he was preceded in some respects by older thinkers such as David Hume, who anticipated many of Freud's later analyses, especially the dominating (id-driven) nature of human behavior. Freud's biggest insight was the unconscious nature of these dominating drivers, which unfortunately makes it very difficult for most people to understand even their own minds.

The relevant thing to note about all this is that these will NOT be the motivating drivers of AI programming or behavior, no matter how "intelligent" the system. For one thing, replicating unconscious motivations in a synthetic, man-made product would not only be foolish; I'm not sure it's even possible. Computer scientists, for the most part, don't understand Freud's concepts any better than anyone else, so the things that would make AI dangerous aren't even on their radar; they're not going to design that kind of thing. And even if they did understand these concepts, building an irrationally controlled droid or advanced AI would be discouraged by the liability, ethical, and other concerns that shape the modern consumer marketplace as much as, if not more than, the technology itself.

I'm not eager to endanger the sanctity of the 'unknowability' of the Singularity, but really, I'm not so sure there's much unknowable about it as it actually comes to pass. The main thing, probably the only thing, making the Singularity seem unknowable is this confusion about what AI will really be like. If you make the simple assumption that AI will be just like human intelligence, then sure, the future is unknowable (or rather, I would suggest, quite knowable: knowably, extremely dangerous).

But, as we've seen, replicating irrational control in a synthetic product is not only foolish and unnecessary; nowhere in the technological evidence available today, in software or hardware, is there any suggestion that it is even a remotely likely trend. Not one iota of evidence supports the contention that AI will be irrational like us.

Therefore, to persist in this visceral fear is just another facet of human irrationality; it is not based on the actual nature of the technological systems that will eventually be designed and built.

I predict with great confidence that this irrational fear will vanish once hyperintelligent systems actually become available and business and consumer users gain experience with them. Unfortunately, I believe any system that could remotely be described as hyperintelligent is a ways off. In all likelihood, mobile robots for the home will lead the way, because multi-tasking robots in the highly unstructured environment of a home demand the most of the various aspects of intelligence: sensory awareness and responsiveness, dexterity, vocal communication rather than programming, and so on.

The most sophisticated robots in general use in the home today are quite simple and perform only one task: the vacuum-cleaning Roomba, the Robomow Automatic Lawn Mower, and the iRobot Security Guard. The (quite slow) extension of these single-purpose robots into multi-tasking robots that can, for example, clear the table and wash the dishes represents a large increase in the intelligence of these devices, and will be the first real step toward the hyperrealistically human droid that I eventually foresee. Even a droid that can clear the table and do the dishes may still be light years away from anything resembling hyperintelligence, to be sure. But it will be an important first step in moving consumers past their irrational fear of dangerous AI, to the point where they regard AI as they do any other technology product: a useful, manufactured device, designed and created for its value in human service.

Hyperrealistically human droids, whenever they arrive, will by then have that exact same status.
