The basic ideas and trends behind the Hyperreality Engine.
The "VirtuSphere" is not exactly what I had in mind when I envisioned the Hyperreality Engine, but it's intriguing. (The link is several years old; I have no idea where this product stands at the moment.)
The entries in this blog reflect a vision that is realistic, if aggressive, to the best of my knowledge. Regardless of how long it takes, I have no doubt that there will eventually be some amazingly sophisticated manifestations of AI on the marketplace.
As I’ve suggested, one major future trend in consumer technology will be “imagination actualization”. This means many things, and droids have an important role, which I describe in another thread. But first, let me describe another set of advanced imagination-actualization applications that are more recognizable in terms of present offerings. These could be available sooner than the droid technologies I will describe. However, it could still take a while: perhaps a quarter century or more, chiefly because of the massive processing power they will require.
Let’s start with the familiar and go from there, in increasing order of imagination actualization. I’m a big video and photo enthusiast. I had a traditional analog camcorder, and that was great. However, when my daughter was learning to walk a few years ago, I upgraded to a digital camcorder, and the light went on for me about the power of digital imaging over traditional analog technologies. In addition to watching the video, you can make movie clips for the computer, and you can capture individual frames from the movie and process them like any other image.
This camcorder had a 1 MP still camera, which I thought was cool, but when I bought a 4 MP camera a couple of years later, I was blown away again. Viewing these images on a computer monitor, especially a large, high-resolution screen, I was amazed: they were so real, it was as if I were there again. To me, the vibrancy of these high-resolution images on a monitor is far more compelling than in print, which is why I would suggest traditional film companies have cause for concern if their revenue models continue to depend on photo-printing revenue.
In fact, the qualitative difference between these digital imaging technologies and the older analog and film technologies is enormous in many ways. Especially with a digital still camera compared to a film camera, an individual picture is nearly costless. In the past, you might carefully manage 5 rolls of film with 35 pictures each, send them off for processing, and moan if some came back fuzzy: not only did those shots cost money, they might have been of one of the subjects you really wanted to keep. With a digital camera, you can take hundreds of pictures, multiple shots per subject in burst mode, and if half of them are blurry, you simply toss those out without concern about cost, and you usually end up with at least one good picture of each subject you were interested in.
And my latest acquisition, a true high-definition 1080i camcorder, amazes me all over again. Not only is the movie resolution stunning, but the individual video captures have roughly an order of magnitude more pixels than traditional digital camcorder vidcaps. So in a busy scene, you can simply point the camera, record high-resolution, high-quality video, effectively taking 30 good still pictures per second, while also snapping a small number of even higher-resolution stills at the same time.
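A quick back-of-the-envelope sketch makes that jump concrete. The frame dimensions below are typical assumed values (NTSC standard definition vs. 1080-line HD), not specs quoted from any particular camcorder:

```python
# Rough comparison of still-capture resolution and volume between a
# standard-definition camcorder and a 1080i HD camcorder.
# Frame sizes are typical assumed values, for illustration only.

SD_FRAME = (720, 480)     # typical NTSC standard-definition frame
HD_FRAME = (1920, 1080)   # 1080i high-definition frame
FPS = 30                  # frames (candidate stills) per second

def megapixels(size):
    """Pixel count of a (width, height) frame, in megapixels."""
    w, h = size
    return w * h / 1_000_000

sd_mp = megapixels(SD_FRAME)   # ~0.35 MP per video capture
hd_mp = megapixels(HD_FRAME)   # ~2.07 MP per video capture

# One minute of HD video yields this many frame-grab "stills":
stills_per_minute = FPS * 60

print(f"SD capture: {sd_mp:.2f} MP, HD capture: {hd_mp:.2f} MP")
print(f"HD frame has {hd_mp / sd_mp:.1f}x the pixels of an SD frame")
print(f"Candidate stills from one minute of video: {stills_per_minute}")
```

By raw pixel count the jump is about 6x per frame, which, combined with better optics and encoding, is what makes HD frame grabs usable as photographs.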
So that’s cool. Now, what can you do with the images and movies from these? Well, you can watch the movies, look at the pictures, process those pictures, and process video captures from the movies like you would any other picture.
That’s great, what then?
For now, that’s pretty much it, I think. However, analyzing these technologies against the theme of imagination actualization gets interesting. What we mean here is: how can I use these movies and images to help me imagine that I am really there again? How can I make that scene as vivid as possible, explore it, and interact with it in increasingly immersive and imagination-firing ways?
There are several levels to these possibilities, and the first is already seeing some commercial software offerings. In the context of a still image, the most straightforward way to make it more immersive is to make it seem three-dimensional, so that you can “walk around” within it. Fotowoosh (http://www.fotowoosh.com/) has an offering like this, with a beta version forthcoming soon.
Another early but promising incarnation comes from Microsoft (and if Microsoft is involved, you know your predictions “got game”): their Photosynth offering (http://labs.live.com/photosynth/). I like the tagline from that site: “What if your photo collection was an entry point into the world, like a wormhole that you could jump through and explore…”
Something like that, yes.
Exploring your images is one thing, animating them, bringing them to life, is another.
What if you only have one old, crumbling photograph from decades ago, and it is the only picture of your great-grandmother, whom you barely knew but everyone said was a saint?
What if the only voice recording you have of your mom is a few seconds, and she was simply laughing or something?
How can we reconstruct the past and actualize our current imaginations?
The AI-rich VR system must, like our droids, be outstanding at deducing super-subtleties via hyperobservancy.
This means extracting the most information from every picture and movie frame, and being able to augment that with fine-tuning drawn from the memories of those who knew the individual, or who were there, if applicable. It also implies having a large database of human movements, voices, accents, etc., to provide the raw fuel for convincing interpolations and extrapolations.
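In its simplest form, “convincing interpolation” means synthesizing what happened between the sparse samples you actually have. The sketch below shows the most basic version of that idea: linearly blending between two captured poses in a tiny, invented motion database. A real system would draw on vast movement and voice corpora and far richer models; everything here (the `interpolate_pose` function, the sample data) is assumed for illustration:

```python
def interpolate_pose(database, t):
    """database: time-sorted list of (time, pose_vector) samples.
    Returns the linearly blended pose at time t, clamping at the ends."""
    times = [time for time, _ in database]
    if t <= times[0]:
        return database[0][1]
    if t >= times[-1]:
        return database[-1][1]
    # Find the two samples bracketing t and blend between them.
    for (t0, p0), (t1, p1) in zip(database, database[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)  # blend weight toward the later sample
            return [a + w * (b - a) for a, b in zip(p0, p1)]

# Invented example: two joint angles (say, elbow and knee, in degrees)
# captured at t = 0 and t = 2 seconds.
db = [(0.0, [10.0, 90.0]), (2.0, [30.0, 50.0])]
midpoint = interpolate_pose(db, 1.0)  # halfway blend between the samples
```

Extrapolation, by contrast, means going beyond the recorded samples, which is where the large databases matter: the system borrows plausible motion from similar people when the subject’s own data runs out.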