Sunday, April 19, 2009

Paths to Advanced AI - Engineered or Brute-Force Brain Simulation?


This is an interesting site. It's a few years old, but Matt Bamberger makes some good comments on the topic of advanced AI and the various paths thereto.

These are excerpts from this page:
"We will soon develop human-equivalent AI, either by brute-force simulation of the human brain, or by a more traditional engineered approach.

By human-equivalent AI, I mean an AI with cognitive abilities at least equal to those of a human being. It isn't necessary for an AI to be exactly (or even moderately) like a human being. I agree with Eliezer Yudkowsky and others who have argued that a human-like AI would be profoundly dangerous.

Human-like AIs are dangerous for two reasons. Firstly, they will tend to exhibit dangerous human traits such as selfishness and fear as well as benign ones. Secondly, the inner workings of a human-like AI will probably be relatively opaque, just as the workings of an actual human brain are opaque. This makes it much harder to monitor and evaluate an AI, both to prevent it from exhibiting malicious behavior and to detect any serious malfunctions such as the development of aberrant goals."



I like his wording here: "AI with cognitive abilities at least equal to those of a human being." That is what is critical for advanced AI to be both useful and safe. In other words, it doesn't have to be exactly like a human brain.

These are some excerpts from his page on Brain Simulation:
"There are two basic approaches to building an AI. The traditional approach is to write an artificial intelligence from scratch. A less sophisticated but perhaps more tractable approach is to simply simulate the workings of an actual human brain. For a variety of reasons that I'll detail below, I don't think this is the best approach, but I think it's an excellent fallback. If traditional approaches fail to deliver a working AI in a timely fashion (which they very well may), there's an excellent chance that brain simulation will be able to deliver a good enough AI within the next few decades."

I do not concur with Matt that brute-force human brain simulation is "an excellent fallback" if we can't figure out how to do engineered AI. However, his outline of the complexities and risks of the brute-force approach is quite lucid.

I have said on numerous occasions that Kurzweil's estimate of 2045 for the Singularity is probably not a bad one, and certainly much more reasonable than much nearer-term estimates like the next few years. I tend to think of "The Singularity" simply as the point when we have an AI with cognitive capability roughly equivalent to that of a human brain. I have also said that this estimate reflects his appreciation of the difficulties of engineering the AI software, since the raw computing power necessary is probably already here, or very close on the horizon (and architectural innovations like those in the "Googling the Brain" video will perhaps make the hardware side of the equation even more tractable).
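As a rough illustration of why the hardware side seems tractable, here is a back-of-envelope sketch in Python. The figures are commonly cited ballpark assumptions (on the order of 10^11 neurons, 10^4 synapses per neuron, firing around 100 times per second), not precise measurements:

# Back-of-envelope estimate of the raw compute needed for a
# neuron-level, brute-force brain simulation. All figures are
# ballpark assumptions for illustration, not precise measurements.

NEURONS = 1e11              # roughly 100 billion neurons
SYNAPSES_PER_NEURON = 1e4   # roughly 10,000 connections per neuron
FIRING_RATE_HZ = 1e2        # on the order of 100 firings per second

# Crude simplification: one operation per synapse event.
ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ
print(f"~{ops_per_second:.0e} synaptic operations per second")  # ~1e17

By comparison, a 2009-era supercomputer like IBM's Roadrunner ran at roughly 10^15 operations per second - within a couple of orders of magnitude of that figure - which is why the hardware looks like the easier half of the problem.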

However, I have perhaps misinterpreted Kurzweil to some degree. What he seems to be saying is that engineered AI might be too tough, and so he favors the "brute-force AI" based on an exact copy of a human brain, described above. I am unsure to what degree he actually believes this is the only feasible approach, or whether he is bringing some personal preferences to it, such as wanting our advanced AI to be "spiritual," or thinking of advanced AI as a "successor to humanity," things like that.

My perspective is that engineered AI, no matter how difficult it may be or how long it may take, is the only viable approach. I think of advanced AI as a partner to the intellectual pursuits of humanity, not a replacement, or successor, to humanity. This notion is supported by the entire history of technology - and though advanced AI will be special, it will still be a technology product - at least if it is synthetic rather than biological, which I suggest it will be, for many reasons.

However, this comment is interesting:
"It's hard to extend the core capabilities of a simulated brain. One of the advantages of the simulation approach is that it requires very little deep understanding of how human intelligence actually works. The downside of that is that it makes it very hard to improve the AI. Even fairly simple tasks like increasing the AI's memory or directly transferring knowledge or skills from one AI to another become very hard problems. Virtual brains will be super-fast, super-capable people, but they're unlikely to be genuinely superhuman."

This seems like an eminently reasonable comment. A brute-force AI may be far less capable of undertaking a runaway loop of ever-increasing intelligence, which of course the Singularity is predicated on. It is therefore unclear to me why Kurzweil seems to favor this approach in the first place. With engineered AI, there are definitely clear paths to increasing its intelligence. My favored concepts are hyper-observancy, super-subtlety, and ultra-coordination, which could consume almost unlimited multiples of human intelligence in ways that provide tremendous value, while remaining entirely rational, predictable, and safe. However, there are undoubtedly many other measures of "intelligence" that could be improved in clearly value-adding ways. The main point here is that the AI will almost certainly need to be engineered rather than brute-forced in order to identify, manage, and increase those measures.

This is the biggest challenge I have with Kurzweil's vision: he simply uses Vinge's definition of advanced AI intelligence as "being able to do anything a human can do," which seems to reinforce his notion that brute-force AI is best, though I'm not entirely sure. However, there are many things a human can do, and more to the point, many parts of the human brain and its psychological components, that an advanced AI has no need for, and that would make it exceedingly dangerous if it had them. This contention that we must "bake in" the dangerous parts of the human brain as the only path to advanced AI is, I suggest, the main and probably only reason that the Singularity is "unknowable." An engineered AI that is purely rationally controlled, as an engineered AI would almost certainly be, renders the post-Singularity "knowable."

Another comment from Matt's site:
"Simulated people are by definition just like people. That means they're cranky and sneaky and prone to behaving badly. Those are bad properties in an AI."

In other words, using Freudian terminology, they would have an id, superego, and ego. If you don't like Freud, use animal passion, morality, and reason, or whatever parlance you like. These three components, the tripartite nature of the human mind, are acknowledged in some form by most of the great thinkers of history, though parlance and emphasis vary widely. For example, in contrast to Freud, Plato argued that reason could, in principle, rule the passions. However, he acknowledged that this usually didn't occur in real life - hence the idealism of his view. The most dangerous of these components, of course, is the id. An advanced AI with an id would be unpredictable, because our animal passions are often irrational. This would, I suggest, make it a poor product to introduce into the marketplace - and if it were introduced, would probably result in the manufacturer getting their pants sued off the first time one of these devices knocked someone through a wall because they somehow "offended" the device, or whatever.

A brute-force AI, even if we or it could figure out how to "increase" its intelligence, could easily become more dangerous and unpredictable as it became more intelligent - a very unfavorable trend for a manufactured product.

A brute-force AI would also "bake in" parts of the human brain that are really only suitable for a biological organism. For instance, the sex drive. To bake that into a manufactured product seems not only dangerous, but actually cruel to the advanced AI itself. Simulated sex drive, cool - but you need engineered AI for that.

1 comment:

IMperfect said...

Hey PB,

Interesting post. One point I have, which may already be clear to you, is that there should be a re-definition of terms to describe the results of those two approaches.
I think the Engineered AI approach should really be called, or thought of as, Artificial Simulated Intelligence, not a true Intelligence, while the brute-force brain simulation may turn out to be a true Intelligence.
To explain: I think in the first case one is just creating a complex system where the intelligence is really just apparent and is, as you said, "knowable" even after the Singularity, since it's just the result of whatever algorithms, however complex, it was programmed with. Just like the Deep Blue machine that beat Kasparov at chess.
On the other hand, in the brain-sim approach, the Intelligence would come out as an emergent feature: the system is not instructed to do anything specific (just like us, or at least that's what we think :)), but it just starts doing whatever it wants (and therefore has the inherent dangers you pointed out). Although in this case, I'm not sure yet what the initial trigger would be...

Also, if the first one somehow becomes aware of the possibility of building a brain - or merging with one ;) - then it could just give itself the second type of intelligence...