Friday, September 21, 2007

Hyperprecise Sensory Awareness of the Artificial Mind


Let's explore how a device powered by an artificial mind would actually behave, what it would actually do. One important difference between AI and HI (human intelligence) is the explicitness of the thought processes. There are many things that require thought but, for human beings, apparently not conscious thought. The subconscious, implicit nature of these thought processes probably keeps us from going crazy.

When it comes to advanced AI and droid technology, on the other hand, one thing is important to understand: the current nature of computers offers all kinds of clues as to how they will think.

Let us first examine the nature of sensory inputs and awareness of the world in general, and the differences there between biological and artificial minds. Most human sensory translations are in fuzzy terms. A few are amazingly precise, such as recognizing another human face, but most, such as distances, shades of color, and illumination intensity, are not.

For example, if you look at a tree from across the street, how tall is that tree? How far away is it? A human might say it looks like it's about 40 feet tall and 100 feet away. A hyperintelligent droid, however, could say, from a glimpse no longer than a human's, that the tree is actually 32.5 feet tall and 76.3 feet away. In other words, not fuzzy estimates, but precise measurements.

Keeping droid sensory measurements in their native, precise state will actually be easier than translating them into human-like, fuzzy terms. It will also be far more useful. Why?

What is really happening here is that the world is being translated not into a fuzzy representation but into a precise model, which is then carried around in the device's mind (which is, after all, simply an advanced computer) for reference or manipulation as needed.
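To make the contrast concrete, here is a minimal Python sketch of the difference between a fuzzy human estimate and the kind of precise, reusable record a droid might carry. The field names and figures are illustrative assumptions, not a real perception system:

```python
from dataclasses import dataclass

@dataclass
class FuzzyEstimate:
    """Roughly how a human carries the tree around: loose, rounded guesses."""
    height: str       # "about 40 feet"
    distance: str     # "maybe 100 feet"

@dataclass
class PreciseObservation:
    """How a droid might store the same glimpse: exact figures it can reuse."""
    height_ft: float      # e.g. 32.5
    distance_ft: float    # e.g. 76.3
    bearing_deg: float    # direction from the observer (hypothetical extra field)
    timestamp_s: float    # when the glimpse was taken

tree_human = FuzzyEstimate(height="about 40 feet", distance="maybe 100 feet")
tree_droid = PreciseObservation(height_ft=32.5, distance_ft=76.3,
                                bearing_deg=12.0, timestamp_s=0.0)
```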

Take the example of driving a car. Say you are in the desert southwest, driving over the crest of a hill, and can see miles down the road. That's great, nice view, but you're not going to take your eyes off the road just because you saw all that road from the crest of the hill.

But an artificial mind, if it were driving, could do the following:

From the top of that same hill, with a glimpse that would seem short to a human, perhaps a second or two, it could build a 3D model of the scene.

Ignore for the moment dynamic elements like other cars and animals crossing the road. After that short glimpse, the AI driver could drive that entire stretch of road, maybe 10 miles, with its eyes closed, knowing precisely where it is along that stretch at all times, and knowing exactly how fast it is going.

You could say, don’t open your eyes until the end of the 10 miles. But at 3.5 miles, slow down 10% briefly. At 7.3 miles, speed up 10% briefly, then resume your cruising speed.

And it could do all that, with its eyes closed the entire way.
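As a toy illustration of that dead-reckoning idea, here is a rough Python sketch. The speeds, distances, and adjustment points are assumptions taken from the example above; the point is that once the road model is built, position comes entirely from integrating the driver's own speed, with no further sensor input:

```python
# A toy dead-reckoning loop: after one glimpse builds the road model, the
# driver tracks position purely by integrating its own speed over time.
# All names and numbers are illustrative assumptions, not a real autopilot.

CRUISE_MPH = 60.0
ROAD_MILES = 10.0
DT_HOURS = 1.0 / 3600.0          # one-second control ticks

# (start_mile, end_mile, speed multiplier): the "slow down / speed up" requests
ADJUSTMENTS = [(3.5, 3.6, 0.9),  # slow down 10% briefly at mile 3.5
               (7.3, 7.4, 1.1)]  # speed up 10% briefly at mile 7.3

def target_speed(mile: float) -> float:
    """Speed the eyes-closed driver should hold at a given point on the road."""
    for start, end, factor in ADJUSTMENTS:
        if start <= mile < end:
            return CRUISE_MPH * factor
    return CRUISE_MPH

mile = 0.0
while mile < ROAD_MILES:         # no sensor input inside this loop: pure model
    mile += target_speed(mile) * DT_HOURS

print(f"Arrived at mile {mile:.2f} without ever opening its eyes.")
```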

So, first difference: the world is rendered into an internal mental model that is as precise as possible, rather than into our fuzzy model.

But mostly this will be because it is easier to do this than to simulate human 'fuzzy' sensory translation processes.

Another example: hearing.

Say a droid has been tasked with developing a talent for playing the piano. Using hyperprecise mental models, it could pick this up quite quickly.

And once achieved, it could be at a concert watching a great pianist playing a most difficult piece. After listening to it once, without ever seeing the sheet music, it could sit down and perfectly play the same number at the piano, even including the great pianist’s personal touches.

It could even do something very few humans could ever successfully attempt. Say the concert it was watching involved a well-tuned piano, as one would hope to find at a concert.

But then the droid is asked to play on a poorly tuned piano. The droid could take the nature of this piano's different tuning into account and modify the piece appropriately, to maximize its resemblance to the concert version even on the out-of-tune instrument.
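Here is one rough Python sketch of how such compensation could work, under a big simplifying assumption: the droid has already measured each key's actual pitch on the bad piano (faked below as a uniform 60-cents-flat tuning), and for each intended note it simply plays whichever physical key comes closest:

```python
import math

# Equal-tempered target frequency for a MIDI note number (A4 = 440 Hz).
def ideal_freq(midi_note: int) -> float:
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Hypothetical measured tuning of the bad piano: key -> actual frequency.
# A real droid would measure each string by ear; here we fake a piano
# whose keys all sit 60 cents flat.
bad_piano = {n: ideal_freq(n) * 2 ** (-60 / 1200) for n in range(21, 109)}

def best_key(target_hz: float) -> int:
    """Pick the physical key whose actual pitch is closest to the target."""
    return min(bad_piano, key=lambda n: abs(math.log2(bad_piano[n] / target_hz)))

# Remap the remembered concert performance onto the mistuned instrument.
concert_notes = [60, 64, 67, 72]                 # say, a C major arpeggio
adapted = [best_key(ideal_freq(n)) for n in concert_notes]
print(adapted)                                   # -> [61, 65, 68, 73]
```

With a uniformly flat piano the remapping simply shifts everything up a key; on an irregularly mistuned instrument, each note would be corrected individually.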

A third example: smell.

Humans have a sense of smell that isn't bad, considering it is not our dominant sense. But a droid could have not only the sensitive nose of a bloodhound, but also the analysis capability of a high-precision mass spectrometer.

The same goes for taste. Droids will be able to 'taste' what we do, but rather than forming fuzzy impressions of sweet, sour, and so on, they will be capable of breaking those tastes down into precise molecular percentages, producing a detailed composition analysis, and storing this information for later retrieval, losing none of the detail upon recall.
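A minimal sketch of what such a record might look like, with invented compounds and percentages; the point is that the stored form is exact numbers rather than adjectives:

```python
# A fuzzy human impression vs. a droid's lossless composition record.
# The compounds and mass fractions below are invented for illustration.

human_taste = "pretty sweet, a little sour"

droid_taste = {
    "sucrose":      0.1240,   # mass fraction
    "fructose":     0.0315,
    "citric_acid":  0.0042,
    "malic_acid":   0.0011,
    "water":        0.8392,
}

assert abs(sum(droid_taste.values()) - 1.0) < 1e-9
# Stored as-is, this record can be recalled later with no loss of detail,
# or compared compound-by-compound against any other sample.
```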

Why will this computer-centric, detailed precision of sense and recall be preferable? Two reasons. First, it will be easier to do this than to replicate the 'fuzzy' thinking of human sensory awareness. Second, there will be immense value in not replicating this fuzzy thinking in the first place.

Hyperprecise sensory awareness enables ultracoordination. Precisely detailed environmental models enable ultracoordinated manipulation of the environment, and they allow an extremely finely detailed feedback loop between what is observed and what is manipulated.
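The shape of that feedback loop is simple to sketch. Below is a toy Python version built around a single hypothetical measurement (the gap between a gripper and its target); each cycle acts on a precise observation, and the next observation reflects the action:

```python
distance_mm = 12.4            # hypothetical gap between gripper and target

def sense() -> float:
    return distance_mm        # a precise measurement, not "pretty close"

def act(move_mm: float) -> None:
    global distance_mm
    distance_mm -= move_mm    # actuation changes the world...

while sense() > 0.01:         # ...and the next observation reflects it
    act(sense() * 0.5)        # close half the remaining gap each cycle

print(f"settled within {distance_mm:.4f} mm of the target")
```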

These are just a few examples of the artificial mind's sensory capabilities. I'm not saying that the main purpose of advanced AI is to drive with its eyes closed, or to play on poorly tuned pianos. Think not about the literalism of the examples, but about the power such interpretation of the world could bring: capabilities that humans could never emulate well (or without risk), but that would nevertheless be of great value were they available.

There are many examples; here's another one:

Say you are visiting Carlsbad Caverns. You and your droid are on a tour, going deep into the catacombs. Then there's some problem, and all the electricity in the entire cavern system goes out. And no one has a flashlight.

If you've ever been in a cave where they turned off the lights for a moment, it is the blackest, most disorienting darkness there is.

But not to a droid. Even if you are miles deep in the catacombs, it could lead you out, retracing the exact steps used to come in.

This implies not only hyperprecise sensory awareness, but also an internal, compass-like sense of direction. An internal GPS may also be used, but it will be an augmentation of the droid's sensory awareness tech. (A GPS system, even one quite precise, would probably not be precise enough to get you out of the catacombs on its own. But as an additional point of reference, internal GPS would undoubtedly be quite valuable in many situations.)
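A toy sketch of the retracing idea, assuming the droid logged every segment of the walk in (the headings and distances below are invented): getting out is just replaying the log in reverse, with each heading flipped 180 degrees:

```python
# A toy path-retrace: log each movement on the way in, then replay the
# log in reverse with headings flipped. Units and fields are illustrative.

path_log = []                       # built while walking in, lights on

def record_step(heading_deg: float, distance_m: float) -> None:
    path_log.append((heading_deg, distance_m))

# ... the tour proceeds; each segment of the walk gets logged ...
record_step(90.0, 14.2)
record_step(165.5, 6.8)
record_step(40.0, 22.1)

def way_out():
    """Yield the exact steps out: reversed order, headings rotated 180 degrees."""
    for heading, dist in reversed(path_log):
        yield ((heading + 180.0) % 360.0, dist)

for heading, dist in way_out():     # works in total darkness: no sensing needed
    print(f"walk {dist} m on heading {heading:.1f} degrees")
```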

None of these capabilities assumes sensory abilities much different from ours, although to the extent they can fit within a hyperrealistically human form factor, senses with increased abilities could be accommodated. Really, it's the way the incoming data is analyzed; that's where the difference lies.

We construct models in our brains, but they are mostly fuzzy and imprecise, except for things like recognizing human faces, which are amazingly precise, honed by evolution into a core competency of human thinking. This hyperprecise sensory awareness capability will be complementary to humans. Once again, the recurring theme: complementarity, not competition.