Monday, September 17, 2007

Where is all the AI, anyway?

Before we get too excited about the coming of advanced AI, we must ask a serious question - where are even the simple forms in widespread use today?
The thread "What AI will really be like" is the good news – that advanced AI is knowable, and will be non-malevolent. Now, for the less-than-good news – AI software maturity appears to be nowhere near the level needed to support many of the prognostications in this area. If you predict fantastic advances in 10 years, there should be at least some simple ones now, preferably with robust trends of consumer adoption, so the key predictive tool of "consumer behavior" can be employed to solidify understanding of what form(s) this technology will actually take.

One challenge with much of the treatment of this field is that we go from nothing to the most advanced applications imaginable – forms that seem exactly human, or even exceed human-level intelligence. There are few if any near-term, intermediate manifestations of this technology, only the ultimate culmination of it. If we can't imagine a rich set of intermediate applications, we may be hard-pressed to make it to this ultimate objective. No other technology has gone from zero to super advanced in this way, so let's not presume this technology is fundamentally different. Perhaps the opposite, in fact, with many more iterations that fall short of the holy grail of being competitive with the human brain, but are nonetheless quite useful in their own right.

The high-level architecture in the "What AI Will Really Be Like" entry would be more appropriate for a system intended to interact with humans and act like one. I think most of us would consider this to be what AI means – to act and interact like a human, with humans.
Although I feel simulated intelligence is a more accurate term for this field overall, let me use the term in a different way here, so things don't get confusing. I will use simulated intelligence (SI) to mean systems that exhibit behavior we might describe as intelligent, but not necessarily in a human way. This will usually, though not always, mean a "simpler" form of intelligence than AI is typically taken to mean.

How many of us interact with nonhuman systems, computers or otherwise, that we think of as "intelligent"? Not functionally rich; intelligent. Even the most advanced, commercially available software, such as Windows Vista, is quite functionally rich – but does anybody, as they use this OS, or any other, think, "you know, this software sure is intelligent"? Honestly, with just about any system I've ever used, the thought of admiring its intelligence hasn't really crossed my mind, and I might wager that the same is true of most everyone else.

It's not really in our nature to admire the intelligence of others, with notable exceptions. If we have trouble doing that with each other, then manufactured products are going to find it even more difficult to have their smarts admired in this way. Because of this, it may prove fruitful to approach this from another direction - not making products "smart", but making them not be "stupid".

When we approach simulated intelligence in this way, there is a lot of insight to be gained. Though few of us have praised the smarts of any of our products, we have certainly cursed at them, been aggravated with them for this or that clueless behavior. There is more to work with when approached from this direction. We can glean a lot more in terms of business requirements, and when we achieve a minimization of stupidity in our consumer products, we have in fact found a way to increase their intelligence.

Let's turn the coin over and examine it in this way. What would it take for us to think of some of our current products, computer or otherwise, as "not stupid" (or at least not as stupid as they currently are)? Big question, lots of different answers, but in its simplest form, some criteria might be:
- Asks you a question once, and uses that answer to inform its behavior thereafter (at least while you're using it) in ways that reduce the effort needed to achieve your ends. As noted above, this behavior is identified less as "intelligent" by its presence than as stupid, and eventually highly irritating, by its absence. A top-of-mind example: when you plug an external disk drive into your computer, up comes the prompt, "What do you want to do – copy the pictures, play the music, or do nothing?" You reply, "do nothing," and click the box that says, "do this action every time." Yet every single time you plug in that drive again, or any other, you have to answer that question again. This is a simple example, and there are already lots of confirmation and preference dialogues in many OSes, but this type of "intelligence" registers with us as an obvious lack of intelligence by its absence – or especially when it asks, you give it the answer, and it forgets, does it a different way, or otherwise departs from your clearly stated preference. In this perhaps simplest form of intelligence, the system asks you a question, you give it your answer, and the system does it that way for you, every time, in every way that seems reasonable. Occasional reiterations of the question are OK, as long as in most ways we feel like the system "gets it" in terms of our preferences.
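This "ask once, remember forever" behavior is mechanically simple, which is part of what makes its absence so irritating. A minimal sketch of the idea (all names here are hypothetical, not from any real OS API):

```python
class PreferenceStore:
    """Minimal 'don't ask twice' preference memory: once the user answers
    a question and checks 'do this every time', the stored answer is
    reused instead of prompting again."""

    def __init__(self):
        self._prefs = {}  # question id -> remembered answer

    def decide(self, question_id, ask_user):
        # Reuse the remembered answer if one exists; otherwise ask once.
        if question_id in self._prefs:
            return self._prefs[question_id]
        answer, remember = ask_user(question_id)
        if remember:
            self._prefs[question_id] = answer
        return answer


# Example: the external-drive autoplay prompt from the text.
calls = []

def ask_user(qid):
    calls.append(qid)  # track how often the user was actually prompted
    return "do nothing", True  # user answers once, checks "every time"

store = PreferenceStore()
first = store.decide("autoplay:external-drive", ask_user)
second = store.decide("autoplay:external-drive", ask_user)  # no new prompt
```

The point of the sketch is that the second `decide` call never reaches the user: the stored preference answers for them, which is exactly the behavior whose absence we curse.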

- To move on to a higher level of intelligence not in common use today: in combination with asking questions, the system observes your behavior over time, asks more comprehensive questions, and makes conservatively realistic preference deductions of a broader scope from that information. This can still be far from a human-like level of deductive capability. To pick an example familiar to me: in image processing software, individuals process images in a number of different ways – sharpening, saturation, color, etc. – such that, over time, an individual can see similarities in the way they process pictures with similar characteristics (low-res, high-res, orangey, red, and so on). An image processing suite that could detect these similarities as well, possibly with confirmation from the user, could in effect proactively process large numbers of digital images or movies without the user having to adjust each one individually.

This higher level of intelligence is more complex, and very rare in currently available systems, to the best of my knowledge. In the image processing example: after a session where you have processed a large number of images in different ways (individually, via batch script, etc.), the software asks at the end of the session, "Remember how you processed images like these?" You say yes, and perhaps get an additional button, "apply my processing preferences," for images thereafter. The software then applies not only its functionality, but the way you have tended to apply it to images with similar characteristics – eventually to the point where, after clicking that button, you say, "Wow, this image is perfect, I don't need to adjust it any more."
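The deduction involved here need not be anything exotic. One naive way to sketch it – purely illustrative, with made-up feature and adjustment names – is to record the adjustments the user made alongside simple image characteristics, then suggest the adjustments from the most similar past image (a one-nearest-neighbour lookup):

```python
import math

class EditPreferenceModel:
    """Sketch of 'remember how you processed images like these': pair each
    past edit with simple image characteristics, then suggest the stored
    adjustments from the most similar previously edited image."""

    def __init__(self):
        self.history = []  # list of (features, adjustments) pairs

    def record(self, features, adjustments):
        self.history.append((features, adjustments))

    def suggest(self, features):
        # Euclidean distance over the feature keys; the nearest past
        # image's adjustments become the suggestion.
        def dist(a, b):
            return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
        _, best = min(self.history, key=lambda h: dist(features, h[0]))
        return best


model = EditPreferenceModel()
# Past sessions: low-res, warm images got strong sharpening and cooling;
# high-res, cool images were left nearly untouched.
model.record({"resolution": 0.2, "warmth": 0.9},
             {"sharpen": 0.8, "temperature": -0.3})
model.record({"resolution": 0.9, "warmth": 0.1},
             {"sharpen": 0.1, "temperature": 0.0})

# A new low-res, warm image gets the matching remembered treatment.
suggestion = model.suggest({"resolution": 0.25, "warmth": 0.85})
```

A real implementation would need confirmation from the user and far richer image characteristics, but the sketch shows how modest the core mechanism of this "higher level of intelligence" could be.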

- Is this behavior intelligent yet? Well, it's getting there. In any case, it's damn convenient, saves lots of time, and makes the user more productive, possibly much more.

Clearly, we're not going to be regularly praising the smarts of any non-human system until it routinely shows more impressive powers of deduction than we typically do. I will describe those in another thread, but they may require something like the AI architecture in the previous thread. However, well before that, there is an exceedingly rich set of potential SI applications that will deserve at least the consideration of implicit intelligence simply because we will not think of them as aggravatingly stupid – in other words, they will seem aware of what we're doing, and do something that makes sense to us, rather than following a mindless, rote schedule.

It hopefully seems clear that these simple SI applications, while a bridge to more advanced AI-type behaviors, are still conventional computer software and hardware technologies. Some of the simpler forms we're familiar with; others we're surprised or annoyed don't exist, regardless of the true reasonableness of that expectation.

This is an extremely cogent point: intelligent behaviors will evolve, probably gradually over the next few decades, in ways that build on the technologies in current use, which we are already familiar with. There will be discontinuities, but they will be exceptions to the rule we already live with every day – technologies progress gradually, in ways that are seldom jaw-dropping, but nonetheless quite useful in the long run.

Other than the asking of simple questions and the collecting of detailed preferences, each explicitly communicated by the user, hardly any of the other behaviors are apparent in any widespread system that I can think of at the moment (give me some examples if you have any; I'm very interested).

So I ask, where's the AI? We've been hearing about AI for decades, and we've tried different forms that had limitations of one kind or another – expert systems, for example. I've heard of some promising directions in this field based on advances in understanding the human brain, and we'll see where those go. But again, where is it, in terms of widespread systems?
Where should we be looking for it, in the first place?
Naturally, droids of the kind I will describe will certainly have a strong incentive to have sophisticated, seemingly overtly intelligent software. This is rather obvious, but these will also be among the most advanced flavors of this type of technology – so we're looking at the finish line trying to see who wins the race, when the starting line has barely (if even) been crossed yet.

One place it would seem reasonable to expect to find these early on is in computing operating systems. I'll pick on Windows, since I'm most familiar with that, but the same comments apply to all the ones I'm aware of.
Why isn't Vista more intelligent? I'll defer to Microsoft for the real answer, of course, but here are some reasonable attempts at one. One answer might be that it isn't necessary. That's possible, but I would suggest unlikely. A more realistic tentative answer might be that there is so much to do in terms of developing and integrating requests from users for what are more straightforward functionality improvements – along with things like making the underlying software infrastructure more accessible to third-party developers, improved handling of graphics, etc.
Doing those things already requires massive numbers of brilliant developers, working together in tightly integrated teams to produce exceedingly complex software that is backward compatible, thoroughly tested, and so on. Each iteration of the OS requires many years of effort by those teams to get the next version out. Significant delays are not uncommon, which implies that even this traditional mode of large-scale software development is becoming so complex that it is very difficult to estimate completion time with much accuracy.
In addition to those very sound reasons, there may be another, less obvious one. It is conceivable that the techniques for developing intelligent software behaviors, such as advanced, implicit preference identification and extrapolation, are to some degree orthogonal to the techniques of traditional software development. OS versions, like all software versions, have a large amount of momentum, and their developers have techniques they are comfortable with and extend in a predictable way. Adding to the difficulty, any intelligent behavior, if it requires significantly different developer skill sets, must be smoothly integrated into the much larger body of functionally-oriented software.

So, there are several possibilities, which apply across the board to just about any software application from any company:
- Few behaviors were requested by the user community that could be described as qualitatively different in terms of intelligence, as opposed to simply more functionally rich.
- Intelligent behaviors were considered, deemed too complex, and deferred.
- Intelligent behaviors were considered, attempted, and dropped because they were too complex (I think this happened with Vista once or twice, though I'm not sure).
- Most user requests were communicated in the form of functional enhancements based on the current, known version ("the menu does this; I would like it to additionally do that").
- Intelligent behaviors have been collected, and some kind of AI-supportive infrastructure is being developed or considered, and will be forthcoming at some point.

To some extent, some or all of these may be true. However, it is my suspicion that for most software being developed today, neither the users in their requests nor the development teams in their enhancement processes are really thinking in these terms to much extent. In other words, the vast majority of momentum across the entire software industry, and almost its entire user base, conceptualizes and executes software in terms of functionality enhancements or modifications.
Functionality improvements are of course highly important and exceedingly challenging; I'm not suggesting anyone's slacking anywhere. However, I would cautiously suggest that very few are even thinking in terms of intelligence enhancements in the first place, so I'm not sure we really know how valuable they would be if carefully selected and well executed.

A very good countering thought: at some point, does the current era of functionality-enhancement software naturally morph into recognizable "intelligence" incorporation? To some degree this may turn out to be true. But I would suggest not necessarily – not in and of itself, without modifying the way software development is conceived and executed.

In any case, the more fundamental doubt I would express is that there doesn’t seem to be a truly flexible, adaptable, and powerful AI architecture out there at the moment that would be a good starting point for incorporation into our current portfolio of conventional software apps.

So my answer to the blog title's question, "Where is all the AI, anyway?":
At least in terms of widespread applications, it's simply not there yet. It has been a long time, 40 years or so, and the field has evolved through many forms of very specialized utility, but it is still not at a point of general applicability. I suggest this is due to a couple of factors. First, it is a very, very tough problem, deserving great respect for that difficulty, and the most obvious application, human-like interaction, is the toughest. Second, to a lesser but still significant degree, we're not really thinking in those terms; we think in terms of functionality creation and enhancement. Of course, the development of a general-purpose AI architecture will most likely lead to slowly growing attention to intelligent behaviors.

That will be a long process, perhaps taking decades, or maybe just years. It's difficult to estimate, because it's hard to say where we really are in terms of this technology without at least one widespread, commercially available product to study.

My point is, AI gets some breathless coverage, because it is quite exciting. Some of the current research is more promising than in past decades, based on our improved understanding of the human brain. However, it is important even for an optimist to be open-eyed and bare-knuckled, in order to possess an optimism able to withstand the light of objective evidence. We should respect the immense difficulty of this space, and be patient – it may be a while before these kinds of technologies are just around the corner, in terms of being in a store or on the Internet (you know what I mean – commercially available).

Because it is so tough, it is imperative that research be pointed in the directions most likely to yield promising results. Different choices may not differ in their results by a year or two, but perhaps by decades.

Further Reading:

Why A.I. Is Brain-Dead, interview with Marvin Minsky

The Singularity Cometh? Or Not?
