Friday, March 13, 2009

AI Rational Analogues for Human Emotions

There is great intelligence in human emotions, and unfortunately, great unpredictability as well. One of the keys to safe AI will be to translate human emotions into rational analogues that are far more consistent and predictable in their operation.

I would suggest that it is almost universally assumed that advanced AI, as it approaches nominally human levels of intelligence, will start evincing all or most of the characteristics of human intelligence. I disagree strongly with this idea in many specific, self-consistent, logical, and empirically-based ways.

However, I want to share another take on this notion that I have alluded to, but not explored in as much detail as perhaps desirable. Let's focus on emotions. It's a very interesting topic, and we've had a lot of good discussions here around it.

Humans have emotions, and we consider them an integral part of what makes us special. However, emotions are shared with most other complex creatures, so in important ways they are the most ancient and least special characteristic of our minds, at least in the sense that other animals also employ them in very similar ways.

However, the key question for my purpose is: are emotions as we conceive of them really necessary for an advanced AI to achieve and evince hyperintelligent (i.e., greater-than-human) behavior? I would say that a rational understanding of the emotions of others is critical, especially those of the primary human interactant(s) with the AI.

Clearly, emotions serve a useful purpose, which is why we have them. Let's examine a very useful one: fear.
http://en.wikipedia.org/wiki/Fear
"Fear is an emotional response to threats and danger. It is a basic survival mechanism occurring in response to a specific stimulus, such as pain or the threat of pain. Psychologists John B. Watson, Robert Plutchik, and Paul Ekman have suggested that fear is one of a small set of basic or innate emotions. This set also includes such emotions as joy, sadness, and anger. Fear should be distinguished from the related emotional state of anxiety, which typically occurs without any external threat. Additionally, fear is related to the specific behaviors of escape and avoidance, whereas anxiety is the result of threats which are perceived to be uncontrollable or unavoidable."

As with many emotions, below a certain level of intensity fear is not incompatible with rationality. However, as with many or most emotions, above a certain level of intensity it can completely obliterate rational thought processes, which makes such emotions unpredictable. This is critical for consideration of advanced AI: advanced AI cannot be unpredictable in the same way we are.
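To make this concrete, here is a minimal sketch of what a "rational analogue" of fear might look like: threat level scales the response continuously, but there is no intensity at which deliberation shuts off, the way panic shuts it off in us. All names and thresholds here are my own illustrative assumptions, not anything from the post itself.

```python
# Hypothetical sketch of a rational analogue for fear. Unlike biological
# fear, the mapping from threat to action is monotone and predictable:
# there is no intensity at which reasoning is abandoned.

def respond_to_threat(threat_level: float) -> str:
    """Map a threat estimate in [0, 1] to a deliberate action."""
    if threat_level < 0.2:
        return "monitor"                      # low threat: keep observing
    if threat_level < 0.6:
        return "increase_vigilance"           # moderate: reallocate attention
    return "execute_planned_withdrawal"       # high: deliberate, planned escape
```

The point of the sketch is the shape of the function, not its contents: every input maps to a knowable output, which is exactly the predictability that high-intensity fear destroys in biological minds.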

Let's examine another emotion: Anger
http://en.wikipedia.org/wiki/Anger
"Anger is an emotional state that may range from minor irritation to intense rage. The physical effects of anger include increased heart rate, blood pressure, and levels of adrenaline and noradrenaline. Some view anger as part of the fight or flight brain response to the perceived threat of pain. Anger becomes the predominant feeling behaviorally, cognitively and physiologically when a person makes the conscious choice to take action to immediately stop the threatening behavior of another outside force. The English term originally comes from the term angr of Old Norse language. Anger is usually derived from sadness.

The external expression of anger can be found in facial expressions, body language, physiological responses, and at times in public acts of aggression. Animals and humans for example make loud sounds, attempt to look physically larger, bare their teeth, and stare. Anger is a behavioral pattern designed to warn aggressors to stop their threatening behavior. Rarely does a physical altercation occur without the prior expression of anger by at least one of the participants. While most of those who experience anger explain its arousal as a result of "what has happened to them," psychologists point out that an angry person can be very well mistaken because anger causes a loss in self-monitoring capacity and objective observability.
Modern psychologists view anger as a primary, natural, and mature emotion experienced by all humans at times, and as something that has functional value for survival. Anger can mobilize psychological resources for corrective action. Uncontrolled anger can however negatively affect personal or social well-being. While many philosophers and writers have warned against the spontaneous and uncontrolled fits of anger, there has been disagreement over the intrinsic value of anger. Dealing with anger has been addressed in the writings of earliest philosophers up to modern times. Modern psychologists, in contrast to the earlier writers, have also pointed out the possible harmful effects of suppression of anger. Displays of anger can be used as a manipulation strategy for social influence."


As with fear, below a certain level of intensity, anger has a useful purpose for human beings. However, above a certain level, it often precludes rational thought, and is therefore unpredictable.

A simple example of the inappropriateness of anger as a controlling mechanism can be given if we assume the form factor of a droid for an advanced AI.

If you are the human interactant then, depending on your disposition, you may strike this droid. What should be the droid's response? Well, one response that is NOT permissible is for it to strike you back and knock you through the wall. (I expect that these devices will be a good deal stronger than a person, as well as faster; these traits will be a key part of their value proposition.)

Now, maybe the human deserves to get thrown through the wall, maybe he's a real a**hole, but that is irrelevant. A device that fights back in this way is unsafe, and the manufacturer will get sued; therefore this type of unpredictable control mechanism will never be designed into a device of this kind. Product safety is a huge and important trend now, and that will continue into the future.

Advanced AI, no matter what its form factor, can never leave the zone of rational control over any action it takes. If you think about it, this makes perfect sense: human beings who keep their rational composure, even when someone is attempting to provoke them (this board has lots of examples of this; you don't have to look far), usually stay in control of the situation.

Going back to our example, how should a droid respond in this case? What, in other words, is the "rational analogue" for anger in this device? Well, it depends on the situation. If it is responding to its primary human interactant(s), the response should be neither slavish nor intimidating; it should be the response that best maintains the respect that human feels for the device. If this human tends to strike often, the response should perhaps be of a shaming nature, to discourage that human's propensity to strike others, which is probably beneficial for that human anyway. If striking is extremely rare, the right response depends on the circumstances; one possibility might simply be laughter, laughing it off. It is highly unlikely that a human will actually injure one of these devices by striking it, so the droid's response can be formulated entirely with the human in mind.
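The decision logic above can be sketched as an explicit policy. Everything here is hypothetical naming on my part (the function, the response labels, the threshold of three strikes); the point is only that the analogue of anger is a fixed, inspectable rule rather than an escalating internal state.

```python
# Hypothetical sketch of a rational analogue for anger: the droid selects
# a response by explicit policy. Retaliation is simply not in the output set.

def choose_response(strike_count: int, is_primary_interactant: bool) -> str:
    """Select a de-escalating response to being struck.

    Goal: maximize maintained respect. Never retaliate, never grovel,
    and discourage habitual striking.
    """
    if not is_primary_interactant:
        return "withdraw"        # disengage from non-primary humans
    if strike_count >= 3:        # assumed threshold for "strikes often"
        return "shame"           # discourage a habitual striker
    return "laugh_off"           # rare incident: defuse with humor
```

Notice that "knock the human through the wall" cannot be produced by this policy under any input, which is precisely the safety property the paragraph argues for.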

For essentially every emotion, I suggest a "rational analogue" can be envisioned, that fills the role of that emotion, and in fact does it better than that emotion, at least in the context of advanced AI.

How about sadness?
http://en.wikipedia.org/wiki/Sadness
"Sadness is an emotion characterized by feelings of disadvantage, loss, and helplessness. When sad, people often become quiet, less energetic, and withdrawn. Sadness is considered to be the opposite of happiness, and is similar to the emotions of sorrow, grief, misery, and melancholy. The philosopher Baruch Spinoza defined sadness as the “transfer of a person from a large perfection to a smaller one.” Sadness can be viewed as a temporary lowering of mood (colloquially called "feeling blue"), whereas depression is characterized by a persistent and intense lowered mood, as well as disruption to one's ability to function in day to day matters."

This seems very unlikely to be a valuable control mechanism for an advanced AI. Witness the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy: that behavior would be excruciating in an actual droid, and would not be successful, I would suggest.

However, here's a very interesting emotion: Empathy
http://en.wikipedia.org/wiki/Empathy
"Empathy is the capacity to share and understand another's emotion and feelings. It is often characterized as the ability to "put oneself into another's shoes", or in some way experience what the other person is feeling. Empathy does not necessarily imply compassion, sympathy or empathic concern because this capacity can be present in context of compassionate or cruel behavior."

Now we're getting somewhere. This will be very important for successful advanced AI, and it can readily be approached by an equivalent rational analogue: observation of the human interactant(s) over time provides a rich topographical picture of each person's personality, from which a rational strategy for empathy can be devised that is neither too shallow nor too indulgent. I suggest that a convincing display of empathy will be an important part of droid hyperintelligence. But again, it is not "true" empathy; it is an evincing of the emotion, for the benefit of the human interactant(s), to provide comfort to them.

Here's another vital emotion: Curiosity
http://en.wikipedia.org/wiki/Curiosity
"Curiosity is an emotion that causes natural inquisitive behaviour such as exploration, investigation, and learning, evident by observation in human and many animal species. The term can also be used to denote the behavior itself being caused by the emotion of curiosity. As this emotion represents a drive to know new things, curiosity is the fuel of science and all other disciplines of human study.

Causes
Although curiosity is an innate capability of many living beings, it cannot be subsumed under category of instinct because it lacks the quality of fixed action pattern; it is rather one of innate basic emotions because it can be expressed in many flexible ways while instinct is always expressed in a fixed way. Curiosity is common to human beings at all ages from infancy to old age, and is easy to observe in many other animal species. These include apes, cats, fish, reptiles, and insects; as well as many others. Many aspects of exploration are shared among all beings, as all known terrestrial beings share similar aspects: limited size and a need to seek out food sources.

Strong curiosity is the main motivation of many scientists. In fact, in its development as wonder or admiration, it is generally curiosity that makes a human being want to become an expert in a field of knowledge. Though humans are sometimes considered particularly very curious, they sometimes seem to miss the obvious when compared to other animals. What seems to happen is that human curiosity about curiosity itself (i.e. meta-curiosity or meta-interest), combined with the ability to think in an abstract way, lead to mimesis, fantasy and imagination - eventually leading to an especially human way of thinking ("human reason"), which is abstract and self aware, or conscious. Some people have the feeling of curiosity to know what is after death.

Morbid Curiosity
A morbid curiosity is an example of addictive curiosity the object of which are death and horrible violence or any other event that may hurt you physically or emotionally (see also: snuff film), the addictive emotion being explainable by meta-emotions exercising pressure on the spontaneous curiosity itself. In a milder form, however, this can be understood as a cathartic form of behavior or as something instinctive within humans. According to Aristotle, in his Poetics we even "enjoy contemplating the most precise images of things whose sight is painful to us." (This aspect of our nature is often referred to as the 'Car Crash Syndrome' or 'Trainwreck Syndrome', derived from the notorious supposed inability of passersby to ignore such accidents.)"


Curiosity is a fundamental facet of awareness, and forms the backbone of much scientific and technological progress. It is a very valuable emotion, and I would argue that a great deal of human rational intelligence evolved as a way to inform our curiosity more successfully. Therefore, although an "emotion", it is among the most rational of emotions, at least when not taken to extremes (as in the description of "morbid curiosity" above).

Anyway, I'm still figuring this out. Bottom line: what I'm suggesting here is that the implicit, unpredictable, somewhat mysterious operations of emotions in human and other animal brains have "rational analogues" that can be explicitly programmed, are far more predictable, and are hence safer for advanced AI.
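The emotion-by-emotion survey above could be summarized as an explicit lookup, which is itself a small demonstration of the thesis: the mapping is written down, inspectable, and the same every time. The analogue labels below are my own paraphrases of the post's discussion, not established terminology.

```python
# Toy summary of the thesis: each emotion maps to an explicitly programmed,
# predictable analogue. Labels are illustrative paraphrases.

RATIONAL_ANALOGUES = {
    "fear":      "threat assessment and prioritized escape planning",
    "anger":     "respect-preserving response policy (never retaliation)",
    "empathy":   "observational model of the interactant's personality",
    "curiosity": "directed exploration and information seeking",
    "sadness":   "(none: not a useful control mechanism)",
}

def analogue_for(emotion: str) -> str:
    return RATIONAL_ANALOGUES.get(emotion, "not yet defined")
```

An unlisted emotion simply returns "not yet defined" rather than producing undefined behavior, which mirrors the predictability argument.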

These rational analogues will form a natural extension of what I have argued elsewhere will be the rational control mechanisms for advanced AI/droids overall. The inherently rational control of these technologies argues for the knowability of their behavior patterns, even when they reach the point of "hyperintelligence", i.e., demonstrably greater rational intelligence than humans. This is my central argument against the "unknowability" of the Singularity (actually, one of several; another is the continued involvement of human beings in their design objectives). The notion of the "unknowability" of the Singularity has never been explained to my satisfaction, but it seems to derive primarily from the idea that these intelligences will be essentially biological in nature - which I suggest is not necessary, safe, or in any way optimal that I can imagine, except that the notion appeals to our heartstrings, which of course is not a rational line of argument.

1 comment:

Anonymous said...

Couldn't an emotion like 'fear' be considered an amalgam of a lot of different biological processes?
Say a droid is, oh, mining somewhere. A rockslide; the droid is in a great deal of danger. It is given priority to find escape to the nearest safest spot; emphasis is given to those sensory perceptions which would allow it to detect the most prominent threats; its traveling speed is temporarily allowed to go above its normal parameters, for purposes of this emergency.
This is a fairly ridiculous example, but you get my point. Don't all these things, together, constitute the AI equivalent of the biological processes which compose fear (increased adrenaline, heightened awareness, flight response)?
And of course this is all precluded by human ingenuity and persistence; among those who design and develop artificial intelligence, there will be those who seek to emulate emotion simply for its own sake -- though I'd put my money in its utility for game design, if nothing else.