Wednesday, October 3, 2007

What The Singularity Will Really Be Like


The point at which humanity is able to create an advanced AI more or less equal in complexity to our own fills us with both elation at the possibilities and dread at the uncertainties, as I have noted here. The Technological Singularity thesis essentially says that once this advanced AI approaches the level of one human intelligence and is instructed to improve upon itself, a runaway, self-reinforcing intelligence explosion will follow that leaves, as one reputable source puts it, “humanity far behind.” This common conception of being "left far behind" rests on the equally common belief that advanced AI will improve upward, into greater and greater strategic planning and implied world dominance, rather than downward, into techniques that help it perceive its environment and interact with humans more effectively. The case made in this blog is that improvement will be focused downward, because that is the direction of maximum added value for human consumers and businesses.

There are Singularity conferences, and futurist and technology forums hum with what The Singularity will mean for mankind. These discussions are invariably non-specific, centering on either vague elation at a benevolent super-AI that takes care of our every need, a sort of God-figure, or vague dread at a malevolent AI that would enslave or exterminate mankind. With all that discussion, however, there is little exploration of how realistic the Singularity concept actually is, or what a real Singularity would look like on the real planet Earth.

It is important to remember that the Singularity is not an empirically derived extrapolation of current trends; it is a science-fiction concept, developed by Vernor Vinge and later appropriated by Ray Kurzweil, who in typical fashion took an armchair speculation and said, well, now it's real, now it's really going to happen, based primarily on the progression of Moore's law.

This entry will engage the Singularity in a specific and constructive way. There are several challenges with the Singularity concept that are consistently evaded by the true believers to whom I submit them. These are:

A. It makes the implicit assumption that our future selves will simply turn our backs while this technology performs a runaway intelligence improvement loop.

B. It gives no concrete account of what is meant by "greater intelligence".

C. The sacrosanct "unknowability" of the post-Singularity world rests on the implicit but strong assumption that this hyperintelligence will be driven as our human minds are, i.e., irrationally. If the hyperintelligence is in fact rationally controlled, the unknowability conceit dissolves; there is very little evidence to support it.

I don't think the Singularity as currently envisioned is likely, for the above and other reasons, but neither can I rule it out. However, we can put some very real and reasonable boundaries around this "unbounded" event, that make the Singularity (or what I call a "steeply-sloped event") far less unknowable in absolute terms.

The perspective of this blog is that any hyperintelligence “explosion”, should that scenario prove to be realistic, will take a finite (not vanishingly small) amount of time to complete, for the following reasons:

1. Unless the hyperintelligence is infinitely great, the time to solve any problem will be finite, not instantaneous.

2. Even if the design, theoretical, and scientific problems can be resolved in an instant by one of these advanced droids, the engineering actualization will still be of finite duration. In other words, unless these droids are capable of the instantaneous transmutation of matter, factory lines will still need to be retooled, and so on, to produce the improved design.

3. These will take the form of consumer and/or business products, and people and businesses have finite abilities to absorb new offerings (even in the future).

4. Humans will still be in charge. For the purposes of this unbounded improvement cycle, even assuming maximum autonomy by the advanced AI tech, humans will still at the very least select the feature sets that are improved in this way.

5. The form of the droid intelligence will be such that self-motivated autonomy will be as alien to them as it is to PCs today. Regardless of the level of that intelligence, its nature is fundamentally alien to these kinds of id-driven characteristics. So not only will they not be selecting their own feature sets to improve, they won't be selecting the problems to be solved with the resulting intelligence "product"; humans will.

6. The nature of the improvements to the identified feature sets will not be a computationally bound problem, but one requiring intense interaction between humans and droids: in particular, the deepening sophistication of hyperobservancy and super-subtlety skills. (A minimal sketch of this human-gated loop follows below.)
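To make the shape of this bounded, human-gated loop concrete, here is a minimal sketch in Python. Every name, rate, and duration in it is a hypothetical illustration of points 1 through 6 above, not a claim about any real system:

    # Minimal sketch of the human-gated improvement cycle (points 1-6 above).
    # All names, rates, and durations are invented for illustration only.

    ENGINEERING_LAG_DAYS = 90   # retooling, production, market absorption (points 2-3)

    def human_selects_feature(catalog):
        # Humans, not the droid, choose what gets improved (points 4-5).
        return max(catalog, key=lambda f: catalog[f]["value_to_humans"])

    def droid_improves(level):
        # Design work is rapid but still finite, not instantaneous (point 1).
        return level * 10

    catalog = {
        "hyperobservancy": {"level": 1.0, "value_to_humans": 9},
        "super_subtlety":  {"level": 1.0, "value_to_humans": 8},
    }

    elapsed_days = 0
    for cycle in range(4):
        feature = human_selects_feature(catalog)
        catalog[feature]["level"] = droid_improves(catalog[feature]["level"])
        # Engineering and human interaction, not computation, set the pace (points 2, 3, 6).
        elapsed_days += ENGINEERING_LAG_DAYS
        print(cycle, feature, catalog[feature]["level"], f"day {elapsed_days}")

Even with a generous ten-fold gain per cycle, the engineering and human-interaction lag, not the droid's raw computation, sets the pace of the loop.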

There are a number of reasons to believe that an improvement loop of this kind will still advance at a finite rate and take finite time to fulfill. However, to maximize the vividness of this scenario, we will assume that a sufficiently advanced AI can improve upon previously identified feature sets essentially unaided, and do so quite rapidly.

Therefore, I won’t assume infinite progress, but something very substantial. Say, when a droid intelligence reaches about 1 human intelligence, it goes through a self-improvement cycle from that level to, say, 1 million times greater, in a product that can be actualized within a year. Not infinite, but very substantial. The 1M number is somewhat arbitrary; it can be bigger if you like.
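To put that figure in concrete terms, a million-fold gain in a year works out to roughly one doubling every two and a half weeks. The doubling framing below is my own back-of-the-envelope restatement, not part of the scenario itself:

    import math

    # A million-fold improvement in one year, expressed as a doubling cadence.
    target_multiple = 1_000_000              # the (arbitrary) end-of-year multiple
    doublings = math.log2(target_multiple)   # about 19.9 doublings
    days_per_doubling = 365 / doublings      # about 18.3 days per doubling

    print(f"{doublings:.1f} doublings, one every {days_per_doubling:.1f} days")
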

For a starting point, I will use the AI architecture I have already described. Not only is this the most realistic one; if you read the trade press carefully, you will see that it is the only kind ever mentioned. It is simply the most straightforward way to achieve hyperintelligence with today’s computer hardware and software infrastructure.

The likely advanced AI architecture will initially be strong in the area of rational and logical reasoning, since that is the core strength of today’s computer architectures. In fact, it is difficult to get them out of that sweet spot at all.

These early droids will be good enough to do many chores, basic communication, and so on. However, their lack of id-simulation sophistication will limit their imagination-actualization qualities; i.e., we will have no doubt that we are hanging with a droid, regardless of how human it may (or may not, initially) appear.

Therefore, this big swath of functionality will be nascent in the earliest droids, and a definite target for improvement.

Now, as I’ve described in the Envisioning the Hyperintelligent Droid article, this sensory and behavioral sophistication will be manifested as the “superpowers” of super-subtlety and hyperobservancy. There will undoubtedly be others, but the value of these two is so strong that they will be part of the feature profile of these devices.

I didn’t note this in that thread, but it’s important to say that one of the most valuable uses of hyperobservancy will be in the detection of super-subtle signals, and even more important, in the accurate interpretation of those signals, not just noticing them in the first place. This doesn’t just mean that when the owner twitches his eye in a certain way, it means he wants lunch; it also means the correct identification of subtle meanings hidden in even quite dramatic and well-known events.
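A toy sketch of the distinction, with the signal names, thresholds, and meanings invented purely for illustration: detection is noticing the faint signal at all, while interpretation is reading it correctly in context:

    # Two separate problems: noticing a faint signal, then reading it correctly.
    # Signal names, thresholds, and meanings are invented for illustration.

    def detect(sensor_frame):
        # Hyperobservancy: pick the faint twitch out of the noise at all.
        return sensor_frame.get("left_eye_twitch", 0.0) > 0.8

    def interpret(context):
        # Super-subtlety: the same twitch means different things in context.
        if context == "approaching noon":
            return "owner wants lunch"
        return "owner is skeptical of what was just said"

    frame = {"left_eye_twitch": 0.92}
    if detect(frame):
        print(interpret(context="approaching noon"))
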

Hyperobservancy of super-subtle signals, especially the kind communicated by other humans, is much more a core human skill than a droid skill. Even so, most of us do it poorly, I would suggest; if you ponder silently how many times you have misread someone or something, you may concur. So droid tech has quite a challenge cut out for it. It must not only detect these signals, but accurately interpret and act on them correctly, almost all of the time: say, eventually with a success rate above 99%. As I noted in the droid thread, that in effect forms an infinite sink for HI multiples.
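As a rough illustration of why that 99% end-to-end figure is so demanding: detection, interpretation, and action compound multiplicatively, so each stage must be considerably better than 99% on its own. The three-stage split and the per-stage rates are my own illustrative assumptions:

    # End-to-end success compounds across stages, so 99% overall is demanding.
    # The stage split and per-stage rates are illustrative assumptions.
    stages = {"detect": 0.997, "interpret": 0.997, "act": 0.997}

    end_to_end = 1.0
    for rate in stages.values():
        end_to_end *= rate

    print(f"per-stage 99.7% -> end-to-end {end_to_end:.2%}")  # about 99.10%

Push the target much past 99% and each stage's allowed error budget shrinks faster still, which is why this forms an effectively bottomless sink for intelligence multiples.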

I’ll bet you can see where I’m going with this. I will assume that even if this putative, largely unsupervised intelligence self-improvement explosion were aimed at this problem (naming the feature to be improved being the one thing we humans get to do in this scenario) and set off with a cycle of progress equal to the force of a nuclear explosion, the main effect would be to rapidly improve these two skills, along with perhaps others I haven’t named yet.

And yet, what is the effect of rapidly improving these skills? It makes the droids even more compatible with, and complementary to, humans.

In other words, the hyperintelligence explosion will not leave humanity “far behind”, rapidly diminishing specks in the rear view mirror of droid autonomy. (The droids lifting their shades slightly, to smirk at their stranded creators, while we frantically wave our arms for them to come back with our car.)
It will only make them more compatible with us, more complementary to us, faster.

So even the most unknowable of all the unknowables in these future models, the intelligence explosion, does not leave us behind but draws us closer together, given the architecture described in this blog. And that architecture is simply the one that will be built from the starting point of the current computer hardware and software industry.

This architecture is the one being realized, and therefore the scenario above is the most likely result of any intelligence explosion.

2 comments:

Anonymous said...

you are way more educated than i but i like you spent a lot of time in left brain. i have spent a lot of the last twelve years trying to use both as you seem to.
i've given a lot of thought to the singularity and i feel it is our destiny.
if the wright brothers had failed to fly, flight would not have been lost forever. flight existed as a potential as do all things that are possible. at their time the potential was timely with light materials and a powerplant. space travel was not timely. as an ee you see that probably there is a potential nearby for ai. (as do I) i believe that all potentials are our destiny like easter eggs hidden for children. we think we create or invent things when in reality they are there for us to find. things that are not for us will not be potential. sounds like a creationist don't i. i do believe that these stepping stones have been here for us all along. flint, stone, wood, peat, coal, oil, metals, electricity, nuclear, chips, all of them leading somewhere unknown. but along the way a smart machine is in our path. good or bad it is our destiny. it will be done by us or species that evolve after us and it has happened on other worlds already. those worlds are already part of a community we will be a part of. distance may preclude a physical meeting but information will pass and we will be together as if watching movies a year or two old on hbo. zombiefood sb@toast.net

Anonymous said...

Well that is sort of how Kurzweil outlines it. If I remember correctly, he similarly takes the stance of specifically rejecting this 'leaving human intelligence far behind' notion, and talks (albeit idealistically and without a lot of specific grounding) about how these developments would be simultaneous to both robots (droids, if you like) and humans, due to conceivably being able to alter our own biology/minds by that point.