Tuesday, August 28, 2007

What Artificial Intelligence (AI) Will Really Be Like

No matter how intelligent, advanced AI will be purely rationally controlled - and hence knowable, non-threatening, and eminently useful.



In coverage of artificial intelligence technologies, whether in print, movies, TV news, or on the net, there is always this anxiety bordering on dread of the projected point when these artificial brains, presumably in a robot of some kind but not necessarily, reach and then eclipse the power of one of our own human brains. Up to that point, the coverage is excitedly optimistic. At that point and after, it takes the tone of: we'd better be afraid, perhaps really afraid. Almost all of the coverage either carries this quiet dread, or stays optimistic but with no real idea of how this superintelligent technology will actually be used in the future. The possibilities are endless, but the attitude amounts to: they'll be the smart ones, so we can't possibly imagine, from our perspective here in the present, how they'll actually be used.

Respectfully, I disagree entirely - these droids and the uses to which they will be put are quite knowable, because we humans will be manufacturing them within constraints, legal and otherwise, that we would recognize today.

The future will be different, but not entirely different in every single way. Corporations aren't going to turn against consumers with malevolent technology, and much of this thread will show that the droids won't get there by themselves.

Have confidence in our future selves: we are not going to lose control of our products in this way. It's funny to say that, because not only is there little risk of our products getting away from us, but just getting them to their intended design behaviors will be an immense undertaking, making Vista look like DOS - or like a simple "Hello, World" program. We won't have to monitor them to make sure they don't get out of hand; that's the movies. We will want their help. In fact, some of these technologies may positively require their significant help in order to be achieved at all - they could be too tough for humans alone. I say that to drive home the technical complexity, not to slam humans; I'm mostly referring to some of the most advanced droid technologies covered later.

This general theme of things getting out of hand once AI technology reaches the human+ level is quite widespread. I did a Google search before writing this so that I wouldn't be addressing something already settled, but the scary or unknowable Singularity and related concepts are, I would venture to say, entrenched as common wisdom. We need to address this, because these chains of reasoning are deeply flawed in important ways - in fact, in just about every way except for the prediction that computing power will some day become as powerful as a human brain's.

Taken together, these two perspectives represent the vast majority of opinion and coverage on this subject, and both will be put to a well-deserved rest in this and following threads. Both views are incorrect, and are therefore limiting in terms of their predictive value. Treating everything beyond the Singularity as unknowable is kind of like saying the sky is blue because God made it that way - and just waiting for the droids to turn on us isn't very helpful either. These technologies will neither be spontaneously malevolent, nor are their applications beyond our ken from here in the present. We can't see everything, of course, but by carefully processing the future we can see a great deal more than is currently believed.

Regarding the potential malevolence of advanced AI systems, the thinking generally seems to be that once these devices get a little past a multiple of 1 human intelligence, they will look at us as the inferior life forms we are, and they will use that intelligence to enslave or kill us as a short step to taking over the world.

Really, what the proponents of that view are saying is that, because these systems will necessarily have to have "brains" organized very much like our own in order to exhibit truly "intelligent" behavior, they will be subject to the same vagaries of intent as our own. But since they'll be a lot smarter, they win. In other words, we fear that they will be like us, because we have a history of using our intelligence aggressively, both among ourselves and with the planet's resources as a whole.

This represents the sharp horns of a most serious dilemma: advanced AI needs to be modeled more or less exactly like our own intelligence in order to behave as humans deem intelligent; however, we understand that our own intelligence is often selfish and unpredictable, so we feel inherently uneasy with replicating these control mechanisms in the intelligence of our advanced consumer products.

This approach is untenable, and not only because of the vast complexity of creating a droid that is driven as we are (as described below). It will take decades longer than necessary, and the result will be a Pyrrhic victory for humans, because our own control mechanisms are elusive and ephemeral. Replicating them in a consumer product would be problematic, and perhaps ultimately a mixed bag of pros and cons that could at the very least suboptimize its market success. If I am correct in assuming, from the limited evidence available, that most advanced AI research dollars are currently going in this direction, then we could be spending billions and losing decades on an ultimately unproductive path.

I predict that when advanced AI technology is finally consumer-ready it will embody a different approach. It will be more of a perception-management engine, designed to respond to and interact with a given human (usually its owner) in such a way that it seems, to that person, absolutely and realistically human. It does not matter whether the droid truly feels emotion, for example, as long as it seems to a human that it does. In other words, the emotions need to be a high-enough-fidelity simulation to be indistinguishable, to a real person's perception, from those of an actual person.

This may seem a little vague or confusing, but not to worry - we have scenarios galore to illustrate what is meant by these concepts, in later threads.

It may be that current approaches to pursuing an advanced AI that "really" thinks seem quixotic in some ways because we don't yet have a good handle on the business requirements - a detailed understanding of the applications we're designing these systems for. Things like the Turing test are good checkpoints, but they hardly inform the detailed direction in which research dollars should be put to best use. Details are important, because when enough details are accumulated, one can often step back and see a pattern, a cadence, which can in turn lead to shortcuts and optimizations that produce the desired result in less time with the important functionality intact.

When a technology is too far away in time, it suffers the same effects that distant physical objects do in space: it looks small, indistinct, fuzzy. Visionaries and futurists are the instruments that bring those distant lands into focus. If that land is going to be a real one, those visionaries need to wear hard hats and keep their feet firmly on the ground, because much of the future is nuts and bolts, just like today. So let's put on a hard hat and address this problem, the central core of almost all interesting advanced future technologies.

Really, an architecture that seems exactly human is as good as one that is truly human, and if that architecture is more stable and predictable, it is better. If that architecture also makes it more straightforward to incorporate greater than human intelligence in ways that are valuable but nonthreatening, then that architecture could be considered superior.

Droids will eventually have everything they need to take over the world, save one – the motivation. The ambition, the will to power, the ability to feel contempt, hatred, or any other emotion, for that matter, good or bad. Without this, their other powers are not threatening at all. They will be able to simulate emotions, but not be controlled by them – a profoundly critical difference, of the first magnitude in importance. Without these emotions in the driver's seat, they will be about as likely to exhibit truly malevolent behavior as a PC today, seriously, no matter how smart they are. Much of this thread will be devoted to supporting this statement.

It's time to be fair to ourselves and acknowledge that the nature of emotions and how they affect our behavior is of monumental complexity, in absolute terms - just because we don't think we're thinking about our emotions does not mean they don't require vast computational resources, both in our own brains and in any man-made system. If for no other reason, their great complexity means there is an almost nil probability of their spontaneous, unintended appearance in a droid or related consumer product of any kind. The main source of unintended behaviors in computers today is software bugs, and bugs create problems - occasionally they even make your computer freeze up - but they don't exhibit highly coherent, complex behaviors. That's why it's called a bug: usually localized in effect, very rarely if ever coherently working with another bug to produce a substantively different software functionality.

The nature of emotions and their control mechanisms is so complex that I doubt science understands them well enough to provide a detailed description, much less a conceptual framework for replicating our emotional drivers in a manufactured product. Remember, scientific knowledge leads engineering knowledge, sometimes by very substantial timeframes; and if our scientific knowledge is still tackling the basics - how do emotions work, how do they control us - then it is reasonable to expect a very long wait for droids driven in this way.

On the other hand, with the architecture suggested in this blog, emotions are not so much instruments of behavioral control as displays for the benefit of the droid's owner. Whatever id-like behavior is simulated will largely be in response to cues from that owner - so the deepest mysteries of emotion and the id don't have to be truly solved, just solved well enough to make these successful consumer offerings.

Don't get me wrong, I'm not suggesting that this architecture will be easy - it will still take decades to realize. However, it will be realizable in a way that an emotion-driven droid will not, for the foreseeable future.

From studying present and past technologies the lesson is abundantly clear: you don’t get something for nothing – complex and unintended behaviors do not appear spontaneously in a human built system, no matter how advanced the technology gets. Any desired system behavior requires work, usually a lot of work, by lots of smart people who have no reason to design a malevolent product, and every reason not to.

You don't get something for nothing - that would fall into the "we should be so lucky" category. This kind of spontaneous creation of new and unexpected behaviors in technology is almost like believing in magic, and the spontaneous creation of highly complex system characteristics is about as likely as real magic - it simply doesn't happen, and is not likely to happen, even in the far future with incredibly more advanced technologies. Not only will that fact remain intact, it could deepen considerably because of the much greater complexity of those systems. We seem to think that spontaneous, complex behaviors are more likely in a more complex system, but actually the reverse is true - they are even less likely, because the spontaneous behavior would have to coherently and significantly affect ever more intricately interconnected subsystems in more and more improbable ways.

The malevolence concern is so extremely implausible that the onus is really on those who claim it could happen to explain exactly how. If a solid, detailed, logically consistent case cannot be made, then we must acknowledge that this is a fear unsupported by any real evidence, and wisely set it aside for the time being, until actual evidence supporting that belief does surface. Until then, it's magical behavior and therefore not useful for predicting the future.

In any case, I predict that the concerns about AI systems turning malignant will evaporate almost as soon as real products are introduced. Much of the anxiety is simply because we don’t have experience with those advanced AI products. If we look at the most formidable computing machines today, supercomputers, they are amazingly fast and powerful, but we don’t consider them likely to spontaneously develop motivations and take over the world – at least I haven’t seen serious concerns of that nature recently.

Far from being malevolent, the exact opposite will be true: advanced AI systems built in the future, even those with vast multiples of human intelligence, will by their very nature and architecture not be competitive with human intelligence, but amazingly complementary. The PC today has that role, and these future techs will feel complementary in the same sort of way, but orders of magnitude more sophisticated. These devices will provide immense value to us, into the far future.

To go further, we need to explore the nature of intelligence more clearly. In particular, let's see if the widely accepted belief that any advanced artificial intelligence must be architected just like ours holds water or not.

Intelligence is a very interesting topic.

I don't believe this almost universal conception is correct, and I completely disagree with Kurzweil and others that these systems will require an organization essentially identical to our own brain. In fact, it will be almost completely the opposite, and it turns out that the type of intelligence exhibited is not threatening and is even quite complementary to humans - amazingly so.

So, we have three intelligence subsystems to work with: id, superego, and rational. What mix of the three does an optimal AI system require - for example, a droid that must interact smoothly and effectively with one or more people while executing many other tasks as well? I will devote a thread to explaining how and why in detail, but for now accept that the optimal mixture will be to make the rational engine the true control center, with id-interpretive and id-simulation engines used as necessary to convince humans of the droid's naturalness, or humanness. Its emotions are purely for our consumption; they don't control the droid's behavior at all. The rational control center orchestrates the simulated emotions and is completely in control at all times.
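To make that split concrete, here is a minimal sketch of the idea in Python. Every name in it - the classes, the cues, the crude string matching - is invented for illustration only; no real design is implied. The structural point is the only point: the rational core decides, and the id engines merely interpret inputs and render displays.

```python
# A minimal, purely illustrative sketch (all names invented) of the control
# split described above: the rational core decides everything; the id engines
# only interpret human cues coming in and render emotional displays going out.

from dataclasses import dataclass


@dataclass
class Observation:
    text: str          # what the owner just said
    owner_mood: str    # crude stand-in for the cues an id-interpreter would extract


class IdInterpretiveEngine:
    """Reads human cues; its output is data for the rational core, never a command."""

    def read_cues(self, obs: Observation) -> dict:
        return {"mood": obs.owner_mood,
                "urgency": "high" if "!" in obs.text else "low"}


class IdSimulationEngine:
    """Renders whatever emotional display the rational core requests; it has no authority."""

    def render(self, display: str) -> str:
        looks = {"warm": "smiles and nods", "concerned": "furrows brow"}
        return looks.get(display, "neutral expression")


class RationalCore:
    """The single control center: it picks both the action and the display."""

    def __init__(self) -> None:
        self.interpreter = IdInterpretiveEngine()
        self.simulator = IdSimulationEngine()

    def step(self, obs: Observation) -> str:
        cues = self.interpreter.read_cues(obs)                        # perception in
        action = "offer help" if cues["urgency"] == "high" else "acknowledge"
        display = "concerned" if cues["mood"] == "upset" else "warm"  # display out
        return f"{action} ({self.simulator.render(display)})"


if __name__ == "__main__":
    droid = RationalCore()
    print(droid.step(Observation("The sink is flooding!", owner_mood="upset")))
    # -> offer help (furrows brow)
```

Note that nothing the simulation engine produces ever feeds back into the decision; that one-way flow is the whole architecture in miniature.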

This is an important distinction from almost all efforts I hear of today in terms of ultimate objective: current approaches seek to have the "emotions" of these droids actually run the droid's behavior and actions. That is a far more complex, inherently unstable, and ultimately unnecessary holy grail to try to attain.

Think about it. These are consumer products; we are not intentionally creating a new, autonomous lifeform for its own benefit. We are not God, we are consumers, and we want a creation that is as tuned to what we want as any technology developed today. Remember, the people of the future will be real people - think about what you would do if the products were available. What would you want? This blog will describe a fair attempt at answering this and many other questions, discarding the future's halo and just walking around in it like the place it is: our place now, but with a little more tread wear on it. One or two pretty bad things will probably happen in the next 100 years, but in the big scheme they probably won't stop man's trajectory of progress. There will be lots, lots more knowledge, both scientific and engineering. At some point, the knowledge itself becomes difficult to dislodge, so it would take a pretty big calamity to come near stopping, or even significantly slowing, that juggernaut.

Despite the amazing knowledge we have gained, we must acknowledge that these are very tough problems. Emulating the human brain - the most complex object in the universe that we know of - is a pretty big set of boots to fill. Let's attack this as consumers rather than semi-gods and do it right, pragmatically. A hundred years seems like a long time, but it is nothing. The eons stretch ahead, and for our civilization to weather those eons we'll need to get sustainable very quickly, and these technologies should provide immensely powerful tools to help us get those equally big challenges under control.

This brain will be a synthetic creation (at least for quite a while), an immensely advanced version of today's microprocessor-based computer systems. Our current systems are very challenging to make, but once made, there is no evidence that any computer circuitry has ever rearranged itself in spontaneous ways. To proactively address a concern out there - "well, the id-simulators might take over, then you're in trouble again," an attempt to get that exciting malevolence back into the picture - please refrain: there is absolutely no reason to believe that the probability of this occurring is anything but vanishingly small.

The potential for malevolence in our droids is kind of attractive in weird ways - sexy, dangerous. But there is not the slightest support for fears of this kind, so even though the droids may seem a little less cool because they won't get medieval on you, once you set that preconception aside, much progress can be made in divining the amazing value proposition these consumer products will provide.

The advanced AI/droid brain will be much advanced, but it should be able to leverage today's chip and software industry infrastructure fairly well. It will be like a super-PC: very different, but alike in one important way - the degree to which emotion figures in the decisions it makes and the actions it performs. Our current computer systems do not seem to be getting more emotional, and emotion is not a parameter anyone considers in their design and production.

Superego-type simulators may be needed as well in some form, also primarily for our benefit. Consider the droid's superego to be the legal code its owner is operating under, customized for locale. This is almost entirely in the owner's interest: it keeps the owner and the droid out of legal trouble, because being in jail means the owner's weal is way down. A droid convicted of or implicated in any legal trouble could mean big problems for its manufacturer, which manufacturers will want to avoid as a high priority. The very best way to avoid trouble of this kind, in almost every case, is to prevent it from occurring in the first place, and this will most likely apply almost exclusively to social situations, when someone besides the owner is involved.
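As a purely hypothetical sketch of how such a superego might sit in the architecture, imagine the legal code reduced to a locale-specific filter applied to a plan before any step is executed. The regions, rules, and action names below are made up; only the shape of the check matters.

```python
# Hypothetical sketch: the "superego" as a locale-specific legal filter applied
# to a plan before any step runs. Regions, rules, and actions are invented.

LEGAL_CODE = {
    "region_a": {"record_audio_without_consent": False, "fly_in_residential_zone": True},
    "region_b": {"record_audio_without_consent": True,  "fly_in_residential_zone": False},
}


def permitted(action: str, locale: str) -> bool:
    """Allow an action only if the owner's local legal code affirmatively permits it."""
    return LEGAL_CODE.get(locale, {}).get(action, False)


def filter_plan(plan: list, locale: str) -> list:
    """Drop forbidden steps up front - prevention rather than cleanup after the fact."""
    return [step for step in plan if permitted(step, locale)]


if __name__ == "__main__":
    plan = ["fly_in_residential_zone", "record_audio_without_consent"]
    print(filter_plan(plan, "region_a"))   # ['fly_in_residential_zone']
```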

So, a rational control center runs the show, and interacts with id-interpretive and id-simulating learning engines as needed to seem human in character, with the droid's interactions bounded by the human legal code.

It may not be apparent, but this organization is a huge departure from the human one - almost as different as it can get. Of central importance is that in the droid brain, the id-simulators aren't driving the droid's behavior; they are being used to understand its environment to some degree, especially to interpret the gestures and actions of human beings, and to simulate id-like responses appropriate to the situation. These behaviors will at some point seem absolutely indistinguishable to us from "real" emotions, but that's because simulations that are sufficiently sophisticated seem real - they suspend our disbelief. That will be the point: to suspend our disbelief with high-fidelity simulations, generated differently from how our real emotions are.

This difference severely undermines almost all of the worries around rogue systems of this kind, because when you think about it, our id is the source of almost all our bad behaviors. That is primarily because it developed long before civilization, in a more rough-and-tumble world. Many of these "primitive" motivations are incompatible with civilization, or at least must be repressed most of the time. Droids will be designed and built with primary directives that are as important to them as our survival and sex drives are to us. However, these directives will be different in ways that are non-threatening to, and much more compatible with, human beings than we often are among ourselves.

For example, our id-driver to survive and thrive is unbounded. This is especially true of the survival instinct - it is usually absolute, and we will react as vigorously to a friend trying to kill us as to a stranger. There are exceptions, of course, such as a hero giving his life to save others, or a parent giving his life to save his child, but these are exceptions to what is generally a very powerful, in effect absolute, will. In droids and other AI systems, this survival instinct, instead of being absolute, will be carefully bounded by hardware and software to be appropriate to the device's design purpose. That is, a droid might resist being turned off by a stranger, but be fine with being turned off by its owner. A more extreme example might be a battle droid that is a fierce killing machine on the battlefield, but docile as a kitten when being turned off by its field commander.
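A bounded survival drive could be as mundane as a permission table. The sketch below is invented from top to bottom - models, roles, policy - but it shows how "resist shutdown" can be just another designed behavior, scoped to the device's purpose.

```python
# A sketch of a *bounded* survival drive, versus an absolute one. The models,
# roles, and policy table are invented; the point is that "resist being turned
# off" is simply a designed, bounded behavior set per design purpose.

SHUTDOWN_POLICY = {
    "household_droid": {"owner", "manufacturer_service"},
    "battle_droid":    {"field_commander", "manufacturer_service"},
}


def accept_shutdown(model: str, requester_role: str) -> bool:
    """Comply with authorized roles, resist everyone else - bounded, not absolute."""
    return requester_role in SHUTDOWN_POLICY.get(model, set())


if __name__ == "__main__":
    print(accept_shutdown("household_droid", "owner"))          # True  - powers down
    print(accept_shutdown("household_droid", "stranger"))       # False - politely resists
    print(accept_shutdown("battle_droid", "field_commander"))   # True  - docile as a kitten
```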

The idea of a bounded or limited survival drive seems strange to us, but that's only because we don't have one. We think of it as part of intelligence overall, but that isn't correct either - it is far more basic than intelligence, a core component of biological creatures competing with each other and with many other creatures in a complex, violent world. However, there is absolutely no reason to believe that a technological system of this kind could not be built with a bounded survival driver as easily as with an absolute one like ours. In fact, when engineering design decisions are made for these devices, a bounded survival driver will be far preferable in almost all cases I can think of. This is another key difference that I suggest delivers a major body blow to fears of rogue AI systems: carefully bounded in essence means carefully controlled. It may seem that the line between bounded and absolute survival is thin, and that a droid could switch from one to the other unpredictably. To the extent that this ever seems to happen to any degree, it will be a bug, not a spontaneous, magical behavior. This is a good thing - bugs are time-consuming to find and eliminate, BUT they do not generally, in isolation or combined with other bugs, produce highly coherent, complex behaviors. In other words, a bug isn't going to produce a robot that suddenly has true emotions, which would be required to do things like take over the world.

The second big id-driver for humans, sexual procreation, will be replaced in droids, by design, with something like "maximize my owner's weal and happiness". This completely selfless driver seems strange to us, but that's only because we don't have it, at least not as an all-consuming motivation. But, again, there is absolutely no reason to believe this driver cannot be designed and manufactured into a system of this kind. The drive for sex is not a basic ingredient of intelligence - rather, it is to fill this need that much of our intelligence has been organized by the demands of biological evolution. And of course, lots of other animals have this need as well, whether or not we consider them intelligent.
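Again purely as illustration, with invented actions and numbers, an owner's-weal driver could amount to nothing more exotic than how candidate actions are scored, with the droid's own preservation carrying only a small, bounded weight.

```python
# A toy illustration (invented numbers, invented actions) of "maximize my
# owner's weal" as the primary designed driver: candidates are ranked by
# estimated benefit to the owner, and the droid's own preservation enters only
# as a small, bounded tie-breaker rather than an overriding instinct.

def score(action: dict, self_weight: float = 0.05) -> float:
    return action["owner_benefit"] + self_weight * action["self_risk_avoided"]


candidates = [
    {"name": "shield owner from falling shelf", "owner_benefit": 0.9, "self_risk_avoided": -0.8},
    {"name": "step back to protect own chassis", "owner_benefit": 0.0, "self_risk_avoided": 0.9},
]

best = max(candidates, key=score)
print(best["name"])   # the owner-centric action wins despite the risk to the droid itself
```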

So, the advanced AI system's controlling rational-core engine will collect data, pass appropriate information to its id-interpretive engine, and respond with a mixture of rational feedback and id-like simulations of human behavior, as appropriate.

At some point we’ll be able to interact with a droid or other AI system of this kind, and it will behave exactly like a real human being – to the extent that we as humans can actually suspend our disbelief that they are not. They may talk, respond, laugh, whatever, like real human beings, but they are totally different inside. Remember, it's all pretend, it's all a simulation. In fact, I think that "simulated intelligence" is a much better moniker for this field. It's less threatening, and more accurate in terms of what's actually going on.

In addition to the above, there are additional reasons that this AI architecture will be preferable to one more closely modeled on our human brain. The biggest one is market efficiency. I predict it will prove far easier to build a rational-driver brain than an artificial brain that is truly id-driven while being also stable and predictable. Our id is a deep and mysterious place, filled with lots of things that could be hard to justify in a manufactured product.

Although it will still be a huge undertaking, creating a rational-engine-driven AI brain with id-interpretive and id-simulating components will be a far easier and more straightforward effort than creating one that is id-driven. The gap between the two efforts may warrant the Vista-DOS comparison again - it would be much harder, and riskier, to drive a droid with emotions than to build the rational-centric architecture described here.

So multiple forces - stability, cost, as well as delivering all the functionality required - strongly suggest that some flavor of this will be the preferred architecture for the majority of advanced AI systems. The relative importance of each component may vary, and as we'll discuss, things like the id-interpretive engine may grow quite large in later models in order to detect and interpret subtler and subtler human signals. Regardless of the relative size or importance any of these components attains, these systems will still be logical and rational-engine driven. We think we are rationally driven - these will be rationally driven in a much truer way. To the extent that they can be described as having id-like drivers (which will, by the way, be controlled by the rational engine), those drivers of "bounded survival" and "maximizing its owner's weal" are nonthreatening. And because we've seen that highly complex behaviors don't spontaneously appear, even in systems of this sophistication, there's really not much to dread in this technology.

This also has important implications for the idea that their intelligence can be measured in multiples of human intelligence - that they will get to 1, then 10, and someday immense multiples, in the millions perhaps, and higher. It goes almost without saying in every other source I've seen that a droid with a million times a human's intelligence would be as far above us as we are above a cockroach, which is why we think that sooner or later we're going to be prime targets for squishing by these droids. However, the discussion above reveals the fatal flaw in fears of this kind. The architecture of their minds is so different that comparing their total mental capabilities with ours is not apples to apples, but apples to oranges. To make a thorough, reasonably accurate side-by-side comparison would require not one number, like the universally accepted multiple of human intelligence, but hundreds, perhaps thousands, of metrics, many of which we probably aren't even skilled enough to devise at the moment. Our ids drive our brains, but in ways that are far more elusive to describe thoroughly than their reasoning control centers.

But of course, companies that will make these are not going to try to communicate a large number of comparison metrics to the future’s consumer base. Then as now, they will simplify these into a small number of metrics that help guide buying decisions, but are far from a complete picture.

For example, one metric might be logical deductions per second. It should be clear that this is the sweet spot for this AI architecture, and here they will outshine us greatly, in the same way computers today are far faster at processing numbers than a human could ever hope to be. Our computers are already millions of times faster than us at many types of calculation, but in ways that we appreciate, that help us, that complement us.

Another potential metric might be a fairer comparison against our own natural strengths - say, emotional cues interpreted per second. I can say with confidence that it will be a long, long time before an AI brain, or any other technology we care to ponder, is better than we humans at that; it is one of our most basic skills, deep within our id, optimized as an important driver by evolution.
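A quick back-of-the-envelope, with entirely made-up numbers, shows why any single multiple is so misleading: the answer flips depending on which metric you headline.

```python
# A back-of-the-envelope illustration, with entirely invented numbers, of why a
# single "multiple of human intelligence" is misleading: the multiple you get
# depends completely on which metric you choose to headline.

human = {"logical_deductions_per_sec": 10,   "emotional_cues_read_per_sec": 1000}
droid = {"logical_deductions_per_sec": 1e7,  "emotional_cues_read_per_sec": 50}

for metric in human:
    print(f"{metric}: droid is {droid[metric] / human[metric]:g}x human")

# logical_deductions_per_sec: droid is 1e+06x human    <- the headline-friendly number
# emotional_cues_read_per_sec: droid is 0.05x human    <- the one the ads will skip
```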

I have no idea which metric or small set of metrics will eventually be used to compare droid and human intelligence. However, if forced to venture a guess, I would say that companies then will do what companies today do: select the metric or metrics that make their product seem most impressive. This implies that multiples stressing the droid's natural abilities will be the most commonly used. If that is the case - if, for example, logical deductions rather than emotional cues are used - then my point that a high multiple is a counterintuitive, even misleading, measure becomes even more true. Be that as it may, at some point the complexity of these droid brains will equal and then eventually far exceed the complexity of our own. But this doesn't mean they will be any more potentially malevolent than earlier models, because the same considerations apply almost irrespective of complexity.

Here's the non-intuitive, common-wisdom-violating result of comparing two radically different types of intelligence with one or two metrics: it may take hundreds, thousands, or even higher multiples of "human intelligence" as represented in these droids to approach the skill with which a human instantly reads and reacts to another person's cues today. Our "animal" id brains are in fact immensely powerful and perform impressive feats of computing without our conscious thought, and what is simple for us will require very advanced AI brains to even approach our skill.

We don't think much about these computationally demanding skills that the id performs much of the time, or we tend to discount them, but that's because we can do them without really "thinking" - so we make the deep error of assuming they are "easy", and that they will be covered by these droids early on, with a few multiples of human intelligence. However, then as now, what is easy for them will be hard for us, and what is hard for them will be easy for us. That is the basic fulcrum of their complementarity to us. Over time, much of their computational investment will go into doing things that are second nature to us.

However, even given this apples-to-oranges, imperfect set of whatever human-to-artificial intelligence metrics are adopted, I will describe in a later post how even truly immense multiples of these metrics - in the millions or higher - in some of these future systems will still leave plenty of ways to focus that intelligence that are not competitive but complementary, used in ways that its individual human owner appreciates deeply but is not threatened by in any way. Fundamentally, instead of going up, into higher and higher levels of strategic, world-conquering ambition or what-not, these multiples will be focused down, into observing and analyzing ever more subtle signals communicated by its owner, other humans, and its environment in general. As we'll see, going down is as much if not more computationally demanding than going the other way. And the design and manufacture of these systems will keep them within their design boundaries, no matter how complex they get, in the same way that microprocessors today are rapidly becoming faster and denser without any growing suspicion that they are becoming more likely to exhibit truly autonomous, non-bug-related motivations. These future droids will be as likely to feel contempt for our human brains as a microprocessor is today.

Malignant robots are prevalent in movies and such, but that is only because it would be a boring movie without the robots getting out of line in some way, using their spontaneously generated ids in ways that, in reality, will remain the sole domain of human beings - and completely alien to droids - for a long, long time, even when those droids have giant multiples of human intelligence and are absolutely indistinguishable from humans in appearance and behavior.

The fact that our droids won't turn on us doesn't mean this is a utopian future, however. There will be real challenges, but they're not what you think they are. They may be as serious as some of the traditional fears, but of an entirely different nature: they will come from within us, not from the droids per se. Wherever we go, humans will lead the way, our will never overthrown by our creations. Those future challenges won't trigger a rush of adrenaline, but something in the pit of your stomach, somewhere between butterflies and a small rock, as we realize we are heading into things that our current technologies barely hint at.

Further Reading:

Applications of artificial intelligence (Wikipedia)

Bayesian theory in New Scientist

Why Robots Won't Rule the World

AI and Scare Tactics, a Tale of Two Species

Ray Kurzweil On IT And The Future of Technology

Searching for Intelligence in Edinburgh

Artificial Intelligence celebrates new advances (Aug 10, 2005)


The Id, Ego, and Superego

Sigmund Freud, Id, Ego, Superego, Biography

Dictionary: superego

Origins of Values

1 comment:

Anonymous said...

Hi, great info!! I will be using some of your thoughts and predictions in my Research Essay for school. I will cite you and give you the credits of course. Its a very interesting subject. I am writing about Visionary people like Alan Turing, Ada Lovelace, Arthur C Clarke and Jules Verne. I will use you though - “At some point we’ll be able to interact with a droid or other AI system of this kind, and it will behave exactly like a real human being – to the extent that we as humans can actually suspend our disbelief that they are not.”