tag:blogger.com,1999:blog-31219090893978963912024-02-02T13:42:26.173-08:00The Empirical FutureIntegrated analysis of technology trends, human psychology, consumer and market dynamics, ethical perspectives and legal trends to discern the probable futureConsultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.comBlogger55125tag:blogger.com,1999:blog-3121909089397896391.post-89569904365887653962011-01-16T08:20:00.000-08:002011-01-16T08:28:14.472-08:00<img style="display:block; margin:0px auto 10px; text-align:center; width: 400px; height: 263px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB9tyCHW5it3wahX2jPgy7M0L0Ns47HcRsRzF5htD_zgRCwaCyVdVuSjQVUA7W4lz6yKAU9g1gFOMpP-reGgApvN2ENgwoxhEI_1QGhOklGudGW_z4HswbW8ir7Hl9SsXJ74nQ-F1-ynk/s320/Empirical+Future.JPG" border="0" alt="" id="BLOGGER_PHOTO_ID_5523481458269196370" /><br /><strong><a href="http://predictionboy.blogspot.com/2009/03/future-prediction-process.html">Introduction to the Future Prediction Process</a></strong><br /><span style="font-style:italic;">Going beyond the standard "wishful thinking and sheer imagination" approach to futurism in order to discern the most probable future.</span><br /><br /><strong><a href="http://predictionboy.blogspot.com/2009/03/artificial-intelligence-technology.html">Advanced Artificial Intelligence Technology</a></strong><br /><span style="font-style:italic;">The good news is that advanced AI technology will be amazingly useful, not at all malevolent, and quite knowable. The other side of the coin is that it will be among the most difficult technical feats ever achieved by mankind.</span><br /><br /><strong><a href="http://predictionboy.blogspot.com/2009/03/robotics-technology.html">Advanced Robotics Technology</a></strong><br /><span style="font-style:italic;">Forget C-3PO and R2-D2. 
Stunningly realistic droids of the future - both physically and psychologically - will be so compelling that we will consider them indispensable.</span><br /><br /><strong><a href="http://predictionboy.blogspot.com/2009/03/virtual-reality-technology.html">Advanced Virtual Reality Technology</a></strong><br /><span style="font-style:italic;">Powered by a specialized form of advanced AI, the Hyperreality Engine will go far beyond your current notions of ultra-realistic, immersive experiences.</span><br /><br /><strong><a href="http://predictionboy.blogspot.com/2009/03/far-future.html">The Far Future</a></strong><br /><span style="font-style:italic;">The far future of humanity, the dark side of the future, and where are all the aliens, anyway?</span><br /><br /><strong><a href="http://predictionboy.blogspot.com/2009/03/analysis-of-future-concepts.html">Analysis of Other Popular Ideas About the Future</a></strong><br /><span style="font-style:italic;">Analysis of other widespread notions about the future.</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com10tag:blogger.com,1999:blog-3121909089397896391.post-75729156925624312011-01-16T08:18:00.000-08:002011-01-16T08:41:48.782-08:00Critique of the Idea of Artificial Neuron Replacement<span style="font-style:italic;">There is a popular idea that has been around for a while now: the concept that we can augment our biological brains with nanomachines that take the form and function of artificial neurons. 
</span> <br /><a href="http://www.youtube.com/watch?v=R-2Xw-GNkUQ" target="new"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 224px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjX2JvclGuhQ_rkV0tgTP5GWGUSQfHTyXaxAkx-nE4mX93-grGg-64j_CkDBLfx2c9CLXI447eg6j9RxTz0TQPSY9NXKVb6lF0qk0LOcgI8Ad6WCr_mP9_qs4U9oI-ZIucWa6L3_8TxIaM/s400/Artificial-Neuron.jpg" alt="Artificial Neuron Replacing a Biological Neuron" id="BLOGGER_PHOTO_ID_5562823208738629954" border="0" /></a><br /><br />Let's examine this idea in detail and see how it holds up. The following quotations summarize the concept well:<br /><br /><span style="font-style: italic;">"drastically alter our selves, by making purposeful changes to the way we operate, combining our selves with engineered systems (including decision-systems), and converting our selves to superintelligent agents."</span><br /><br /><span style="font-style: italic;">"Using nanomachines to gradually replace the organic brain cells with synthetic neurons will probably be the first step in truly bridging the gap between man and machine (think Ship of Theseus analogy)."</span><br /><br />I have been considering this. Although I appreciate how this route may seem to resolve certain issues such as the conscious identities paradox, there are some extremely serious problems here.<br /><br />I will set aside for the moment the flawed notion that digital computers are “superintelligent”, whilst collective computers are somehow an inferior form of computational architecture. I have explained elsewhere why this is not the case; for now it will suffice to say that collective computers are not only inherently massively parallel but also massively interconnected, while digital computers are inherently neither. 
Digital computers are inherently sequential – they can, through arduous effort, be made to have some (usually small) degree of parallelism, but they have nothing that reflects massive interconnectedness.<br /><br />However, that is not the ground I want to cover here. In this post I want to examine in detail the idea that miniaturized electromechanical nanobots would make good piecemeal replacements for biological neurons; that we can dust them into our domes and have them slowly replace (or augment) said neurons, resulting in a synthetic brain of some kind.<br /><br />Let’s go back to our newly gained understanding of the pieces of collective computation, and how they are evinced in our own brains. As we now understand, there are two main pieces here. First, the topology of the network, i.e., how each neuron with its thousands of inputs and outputs is connected to every other. Second, the resistance at each synaptic connection point, which is not a static value in our own brains, but a complex variable function that we have only begun to understand.<br />These components both individually and together affect the computational result of the collective computer – they are both very important.<br /><br />Now, let’s introduce one of our hypothetical artificial neurons into the mix. Let’s assume a realistic scenario, where we dust these into someone’s brain and they must adapt to the local conditions of the neuron they end up replacing. In other words, we are not designing specific artificial neurons for specific biological neurons in someone’s individual brain. What must one of these artificial neurons be able to do?<br />Well, several things, all of them extremely challenging. Once it identifies a neuron it is going to replace, as a first step it must match that neuron’s topology exactly. That means it must determine how many tendrils the target neuron has, and deploy as many as 10,000 long, thin tendrils of its own. 
Bear in mind that the number of neuronal connections varies tremendously, but 1,000 to 10,000 should cover most cases, to the best of our understanding. And the termini of these “arms” are extraordinarily tiny – the animations showing this replacement, where a big fat artificial neuron replaces a big fat biological neuron, laughably fail to capture the complexity of the physical dimensions of this problem.<br /><br />Turns out, that’s probably the easy part.<br /><br />The next challenge will be to seamlessly replace all of the synaptic connections of our soon-to-be-replaced biological neuron with the artificial synapses of our synthetic neuron – and each of these must interface with biological neurons that, of course, have not yet been replaced. Bear in mind that how they convey signals is entirely different from what our artificial neuron is likely to be able to do – they are chemical processes, not electronic in the sense we know from our digital technology – complex biochemical neurotransmitters that must be released and/or taken up with incredible precision in order to accurately match the effective neuronal resistance of that synapse in a working brain. The ability to adjust the synaptic resistance with exquisite precision is what gives our bio brain’s collective computer its “programmability” – the collective computer equivalent of what we call software in the digital realm.<br /><br />Therefore, your artificial neuron must have so much biological capability that it is really hard to imagine what it would bring to the table simply by virtue of being artificial.<br /><br />Now, assume it can do all of these things – which I do not, but I understand some of you might. What specifically does this newly hooked up artificial neuron bring to the brain that somehow conveys anything special above what our normal neuron could? It can’t fire faster, because it is interfacing with biological neurons that can’t take this. 
And it is very important to understand that “clock speed” for a collective computer is not nearly as important as clock speed is for a digital computer – precisely because a collective computer can do in 5 or 6 clock cycles what would require millions, billions, or even more cycles (depending on the computational task) from a digital computer.<br /><br />Not trying to be negative, not saying all this is impossible or anything. But it’s fun to occasionally think these things through.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com5tag:blogger.com,1999:blog-3121909089397896391.post-15292336823456622042010-10-11T10:52:00.001-07:002010-10-11T11:07:23.058-07:00Introduction to Neural Networks<img style="display:block; margin:0px auto 10px; text-align:center;width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsnD0Y_vcSo8xBbzsno0-_LawGB3Zqu_g5sXH_2_bd4t7ikpDx1kfcFAlgWXp1hFzq4e-_GmR5ZHjcyrxWH2NHXeHGBeG1UE6XJpSRpfBGcbqxLQpunbGWQ8A6gCHI7NxEx7Q9skL0c-E/s400/Introduction+to+Neural+Networks.jpg" border="0" alt="Introduction to Neural Networks" id="BLOGGER_PHOTO_ID_5526851546451025730" /><br /><br />There is considerable misunderstanding with regard to neural networks, how they compute, how they store memories, and so on. Here are some clear and concise explanations of three topics that provide a solid foundation:<br /><br />1. The nature of <a href="http://predictionboy.blogspot.com/2010/10/collective-computation.html">collective computation</a> (the form of computing that takes place in neural networks), and how it differs from digital computation.<br /><br />2. How neural nets store and retrieve data via <a href="http://predictionboy.blogspot.com/2010/10/associative-memory.html">associative memory</a>, and how this differs from digital computers.<br /><br />3. The correct <a href="http://predictionboy.blogspot.com/2010/10/electronic-neurons.html">electronic analogue for a biological neuron</a>. 
In other words, what we should picture, translated to the world of electronics, when we think of what a neuron does.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-74386889660504524992010-10-11T10:46:00.001-07:002010-10-13T11:01:18.670-07:00Your Brain as a Neural Network<img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 266px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjN00RhlSsxmIJx8DQaUf62sZJo5GE7oz2XUuznsIFPogT67bIH_hjZgk0VmyStvRXzTbNXHrjUG5Gf13vZEgmBWN8JPwo0-H_1blYMVEptYO9cW7d1F69imhQow37WZz9V8yQr2zDs-3Y/s400/Your+Brain+as+Neural+Network.jpg" alt="Your Brain as Neural Network" id="BLOGGER_PHOTO_ID_5526851865507758834" border="0" /><br />Your brain is a neural network.<br /><br />One statement that follows immediately from the fact that the brain is a neural network is that every computation performed by the brain is a <a href="http://predictionboy.blogspot.com/2010/10/collective-computation.html">collective computation</a>, not some other kind of computation. Not a digital computation, not a quantum computation, not some other flavor of computation - collective computation and collective computation exclusively. Even thought processes that seem like they would be digital, such as adding 1 and 1, are in our brains performed via collective computation and <a href="http://predictionboy.blogspot.com/2010/10/associative-memory.html">associative memory</a>.<br /><br />An adult human brain has about 100 billion neurons. The number of synaptic connections is not precisely known, but estimates vary from 100 trillion to 500 trillion. Both which neurons are connected to which others (the “topology” of the neural net) and the strengths, the “resistance” (in electrical analogue terms), of each of those synaptic connections matter to the collective computation process. 
If you vary the connections, you potentially get a different computed result. If you vary the connection strengths, you potentially get a different computed result. If you vary both, you potentially get a still different computed result.<br /><br />The various neural nets and sub-nets in your brain perform an incredible diversity of processing tasks. They process sensory information; they control the movements of your physical body (your nervous system should be thought of as an extension of your brain’s neural network); they recognize the letter “e”; they perform feats of imagination; and they handle countless other tasks.<br /><br />The net of nets that is your brain does all these things, and many more, and it does it all exclusively through collective computation and associative memory.<br /><br />As we learn to develop artificial neural networks that perform ever more complex processing tasks, there is another aspect to our biological brains that we have barely begun to tackle.<br /><br />That is, that our brain is not just a bunch of neural nets, each doing its own processing functions; these nets are tightly interwoven into a highly synchronized, integrated processing entity. This means that there are neural networks that control other neural networks, which in turn control still other neural networks, and so on. The output of all these collective computations then flows back up the chain of neural nets to execute the “output” of whatever computation is being considered – say, swatting a fly, writing the letter “e”, or simply staying in the brain as a thought. The complexity of designing a neural network to perform even one specialized computational task of nontrivial complexity can quickly become daunting; designing a system of nested, hierarchical neural nets that can perform a range of different computational tasks under one hood (so to speak) is a far greater challenge still. 
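The point made above, that varying either the topology or the connection strengths can change the computed result, can be made concrete with a toy numerical sketch. The following Python snippet is purely illustrative (the three-unit size, the weight values, and the simple sign-threshold update rule are all invented for the example, not drawn from real neural data): the same starting state settles into different stable states under two different synaptic weight matrices.

```python
# Toy demonstration: the same input state converges to different
# self-consistent stable states when the synaptic weights change.
# All weights and states are invented for illustration.

def settle(weights, state, max_sweeps=50):
    """Asynchronously update each unit to the sign of its weighted input
    until the network reaches a stable (self-consistent) state."""
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i, row in enumerate(weights):
            new_si = 1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
            if new_si != state[i]:
                state[i] = new_si
                changed = True
        if not changed:
            break
    return state

start = [1, -1, 1]

w_a = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]        # mutually excitatory weights
w_b = [[0, -1, -1], [-1, 0, 1], [-1, 1, 0]]    # unit 0 now inhibits the others

print(settle(w_a, start))   # settles to [1, 1, 1]
print(settle(w_b, start))   # settles to [-1, 1, 1]: different weights, different result
```

With asynchronous updates and symmetric weights, a network of this kind always settles into some stable state; which state it reaches is determined jointly by the topology and the strengths, exactly the two ingredients described above.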
Our ability to design and build artificial neural nets that control other neural nets in complex yet stable and consistent ways is in its earliest infancy.<br /><br />Because the outside world is an extremely information-rich and ever-changing place, it is imperative that our brains be able to constantly process, accommodate, and successfully adapt to new information. There are at least two mechanisms to achieve this.<br /><br />One is adding new neurons. Very recently it has been discovered from detailed experiments with rats that their brains grow about 10,000 new neurons a day, throughout their lives. A further, truly fascinating result is that if the animal is not learning something new, many of these neurons don’t stick around; they die off quickly. If the animal is learning something new (the experiments concerned mazes and such, various standard rat intelligence training exercises), that knowledge is incorporated by the wiring up of these new neurons with existing neurons to form a permanent addition to the rat’s brain.<br /><br />This is a process that is common to at least all mammals (and probably most other creatures with brains as well, to some extent) – including, of course, humans. Especially humans, since our brains are our main evolutionary survival strategy. The rate of neuron production in humans is not known exactly, since determining it involves direct physical examination of the brain (many rats died to bring us this knowledge, in other words) – but I would venture that a reasonable and perhaps even conservative estimate for the new neurons in humans might be something like 10 times the rat number – say, 100,000 neurons a day.<br /><br />An interesting additional insight from these studies was that some factors can accelerate this natural rate of neuron production, such as exercise and certain foods. 
In addition, other agents such as alcohol were shown to inhibit this natural rate of neuron production – so drinking and drug use do reduce your ability to acquire new knowledge, at least to some extent.<br /><br />Once incorporated into the brain, this neuronal topology of connections seems to be more or less fixed.<br /><br />In addition to the neuron connection topology, there is the other dimension of neural network flexibility that must be considered – that is, the connection strengths of these myriad connections. If you vary the connection strengths, you potentially alter the computed result – the stable, self-consistent “answer” or response output of the neural net.<br /><br />While the physical connection topology of the brain may be more or less fixed (with the exception of the new neurons coming on board every day), this is not the case with the connection strengths. These vary widely via the passing of ions across the synaptic channel, which can produce the chemical equivalent of changing the resistance (or voltage drop) in an analogous electronics circuit. This is happening on a grand scale in our brains, where resistance profiles that vary based on the inputs produce a circuit that converges on a self-consistent stable state. This could allow for very efficient neural nets that can, say, recognize many human faces with far fewer neurons than if each face had a dedicated neural network with its own, dedicated neurons.<br /><br />I don’t want to get too detailed in terms of how a biological brain achieves this, but here is a very brief description. Synapses exhibit a behavior called "spike-timing-dependent plasticity", which is thought to be a possible basis for memory and learning in human and other mammalian brains. 
The synaptic connection between neurons becomes stronger or weaker, as the time gap between when they are stimulated becomes shorter or longer.<br /><br />These characteristics – adding new neurons and adjusting the strength of the connections between existing neurons – endow the neural network in our head with programmability – the ability to accommodate new information and behaviors, where the brain’s control centers and the inputs from the external world can modify the brain’s “computed result” (which is of course many computed results for even simple tasks) over time.<br /><br />The human brain is the most complex single object in the universe that we know of (with apologies to any more advanced alien brains that may be out there). The brain is an expensive organ, using something like 20% of the body’s energy budget while accounting for only about 2% of its mass.<br /><br />The complexity of the brain’s processes translates into two main factors – the neural network itself, keeping all those neurons and synapses alive and well, and the “software” to run it, the mechanisms for bringing in new neurons and changing the connection strengths of the existing ones. Without these mechanisms, our brains, no matter how large and complex, would be brittle and limited once they were filled with knowledge (which given the sensory richness of the world would not take long).Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com1tag:blogger.com,1999:blog-3121909089397896391.post-17400668745454895162010-10-06T11:06:00.000-07:002010-10-06T11:09:05.762-07:00Computational Energy SurfaceEssentially, a computational energy surface is an n-dimensional surface that represents the voltage output for any combination of input voltages. A well-designed neural network will have valleys, or self-consistent stable points, at the places that correspond to particular “answers” for a given set of input voltages. 
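These valleys can be illustrated numerically. The sketch below uses the standard quadratic energy function for binary-state networks (the three-unit size and the weight values are invented for illustration): it enumerates every state of a tiny network and reports the states from which no single unit flip can lower the energy, i.e., the valleys.

```python
# Enumerate all states of a tiny binary network, compute the standard
# energy E = -1/2 * sum_ij w_ij * s_i * s_j, and find the "valleys":
# states where flipping any one unit raises the energy.
from itertools import product

weights = [[0, 2, -1],
           [2, 0, -1],
           [-1, -1, 0]]   # symmetric, zero diagonal; values invented

def energy(state):
    return -0.5 * sum(weights[i][j] * state[i] * state[j]
                      for i in range(3) for j in range(3))

def is_valley(state):
    """A state is a local minimum if no single flip lowers the energy."""
    e = energy(state)
    for i in range(3):
        flipped = list(state)
        flipped[i] = -flipped[i]
        if energy(flipped) < e:
            return False
    return True

states = list(product([-1, 1], repeat=3))
valleys = [s for s in states if is_valley(s)]
print(valleys)   # [(-1, -1, 1), (1, 1, -1)]
```

Changing any weight reshapes the surface, and can move, deepen, or remove these valleys.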
The shape of this computational energy surface is determined by the connections between the amplifiers, the strengths (resistance) of each of those connections, and any external currents applied to the amplifiers.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-14039518688857873822010-10-01T10:36:00.001-07:002010-10-06T11:19:51.630-07:00Electronic Neurons<span style="font-style:italic;">A biological neuron is an analog computing entity that is modeled in artificial neural networks by another analog component known as an operational amplifier, or saturable amplifier.</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; width: 400px; height: 255px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeW6WkWZDap3x2ii4joHhlFuovwHb-TF3tsyCaJ5JXy-2K2Aw2ejAVVODY35_gy8bOXgVTUnuAB_pahADE0sH7dXAubY8JkDyfbK4YTbSoJvgQ_DVg0DU6zJbinUJ24HyD0iDzJXVrPe8/s400/Electronic+neuron+model.gif" alt="Electronic model of a neuron" id="BLOGGER_PHOTO_ID_5523537762347676274" border="0" /><br />Neurons are complex, but even a highly simplified model of a neuron, when connected with others in an appropriate network, can perform powerful computations. A biological neuron receives information from as many as thousands of other neurons through synaptic connections and passes on signals to as many as thousands of other neurons. The synapse, or connection between neurons, mediates the “strength” with which a signal crosses from one neuron to another. Artificial “neural” circuits have been built from simple electronic components: operational amplifiers replace the neurons, and wires, resistors and capacitors replace the synaptic connections. 
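A single model neuron of this kind can be sketched in a few lines of Python. The tanh transfer function is one common choice for modeling a saturating amplifier; the gain, weights, and input values here are invented for illustration.

```python
# A saturable-amplifier model neuron: the weighted sum of input signals
# is passed through a saturating transfer function (tanh), so the output
# stays bounded no matter how hard the neuron is driven.
import math

def model_neuron(inputs, weights, gain=4.0):
    """Weighted input 'currents' in, bounded output 'voltage' out."""
    net_input = sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(gain * net_input)

moderate = model_neuron([0.9, -0.2, 0.4], [0.5, 1.0, -0.3])  # mid-range output
saturated = model_neuron([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])   # driven hard: near +1
```

Real operational-amplifier circuits also have dynamics in time (the capacitors mentioned above); this static sketch captures only the saturating input-output relation.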
The output voltage of the amplifier represents the activity of the model neuron, and currents through the wires and resistors represent the flow of information in the network.<br /><br />Both the simplified biological model and the artificial network share a common mathematical formulation as a dynamic system - a system of many interacting parts whose state evolves continuously with time. The manner in which a dynamic system evolves depends on the form of the interactions. In any neural network the interactions result from the effects one “neuron” has on another by virtue of the connection between them. Thus it is not surprising that the behavior of the neural circuits depends critically on the details of the connections, and the strengths of each.<br /><br />The computational behavior exhibited by neural networks is a collective property that results from having many computing elements act on one another in a richly interconnected system. The collective properties can be studied using simplified model neurons based on operational amplifiers, resistors, and capacitors.<br /><br />Further reading:<br /><a href="http://en.wikipedia.org/wiki/Neural_network" target="new">Neural Network</a><br /><a href="http://www.scholarpedia.org/article/Silicon_neurons" target="new">Silicon neurons</a><br /><a href="http://en.wikipedia.org/wiki/Biological_neural_network" target="new">Biological neural network</a>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-63054478864169727672010-10-01T10:32:00.000-07:002010-10-06T11:06:16.315-07:00Associative Memory<span style="font-style: italic;">Neural networks store and retrieve memories and data via associative memory. 
What is associative memory, and how is it different from digital computer data access?</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 293px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyHK-RHIOpEoBWZyC_eTJY_jzIi9DsFb58WAWBkF5FdUDrgRX49HawTYCvqNJuh6S5cjy3GesnCUYkRfEwhUXANQ0JCp3ko_7VUj9Gvs56Xyae41mg-I42w_lQiG6bHl01_xHgKzDeiSo/s400/memristor+neural+net.jpg" alt="Associative Memory neural net" id="BLOGGER_PHOTO_ID_5524155620106594866" border="0" /><br />Associative memory is a concept that originated in the field of psychology, and in that original sense means recalling a previously experienced item by thinking of something that is linked with it, thus invoking the association. Associative memory or associative storage is a data-storage technique by which a location is identified by its informational content rather than by names, addresses, or relative positions, and from which the data may be retrieved. As with many areas where neural networks are highly efficient, associative memory is yet another form of optimization problem, in this case finding the best (optimal) match given partial information. <br /><br />An associative memory is fundamentally different from a digital computer memory. A conventional computer stores information by assigning addresses, which identify the physical locations where the data will be stored in hardware, such as a sector or track on a hard drive. When the central processor requires a piece of data, it issues an instruction to read the data at a particular address. The address itself contains no information about the nature of the data stored there.<br /><br />Now reflect for a moment on your own memories. If you think of a particular friend, you will remember many facts – name, age, hair color, height, job, hobbies, schooling, family, house, shared experiences and so on. 
These facts are somehow combined to form the memory of the individual. There is no notion of storage address in the way you retrieve such information from your memory. Instead pieces of the information itself are used in place of an address.<br /><br />Fruit flies and garden slugs have associative memories. The fact that such relatively simple nervous systems display the phenomenon suggests that it must be a natural, almost spontaneous property of biological neural networks. Mathematical models of associative memory were developed in the 1970s. The concept of the computational energy surface provides a means to understand and study associative-memory circuits built of saturable amplifiers.<br /><br />How would one make a collective-decision circuit behave like an associative memory? Think of a space of many Cartesian coordinates in which each axis is labeled with some attribute a person might have. One axis might refer to height, one to sailing experience, one to the first name of the individual, one to city of residence and so on. Any point in the space describes the characteristics of a hypothetical possible individual. Each of your friends is represented by a particular point in the space. Because you have very few friends compared with the set of all possible individuals, if you put a mark at the position of each of the people you know, you will have marked a very few points in a large space. When someone gives you partial information about a person – for example, color of hair and weight but not name – this describes an approximate location in the space of possible people. The idea of an associative memory is to find the friend who best matches the partial data.<br /><br />A collective-decision circuit such as the one described for the task-assignment problem could perform as an associative memory if the computational energy surface can be shaped to have valleys, or stable points, at the places that correspond to particular memories. 
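A discrete software analogue of this recall process can be sketched as follows. The patterns and network size are invented for illustration, and the Hebb-rule storage used here is a standard textbook construction rather than anything specific to the circuit described in this post.

```python
# Store two invented "memories" via the Hebb rule, then recover a complete
# memory from a corrupted cue. Retrieval is by content, not by address.

def store(patterns):
    """Hebbian weights: connections strengthen between units that are
    active together across the stored patterns."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, max_sweeps=20):
    """Update units asynchronously until the state stops changing."""
    state = list(cue)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(state)):
            s = 1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0 else -1
            if s != state[i]:
                state[i], changed = s, True
        if not changed:
            break
    return state

memories = [[1, 1, 1, -1, -1, -1],
            [-1, -1, 1, 1, 1, -1]]
w = store(memories)

cue = [1, -1, 1, -1, -1, -1]   # first memory with one unit flipped
print(recall(w, cue))          # recovers [1, 1, 1, -1, -1, -1]
```

Note that the cue is itself a degraded copy of the data: partial information is used in place of an address, just as in the friend-recall example above.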
A pattern of input voltages corresponding to a partial memory would be supplied to the amplifiers and the circuit would then follow a trajectory to the bottom of a local valley in the computational energy terrain and read out the output state of the amplifiers as the stored memory. Unlike the task-assignment circuit, in which the connections are highly regular because of the simple global rules that constrain the problem, in an associative memory the connections are irregular and the stable points are scattered somewhat at random because the memories need not have any particular relationship among themselves. Therefore, to construct an associative memory one must find connections between amplifiers such that the many desired memories are represented simultaneously by the circuit’s stable states.<br /><br />Further Reading:<br /><a href="http://richardbowles.tripod.com/neural/hopfield/assoc.htm" target="new">An Associative Memory</a>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-64699398418836975252010-10-01T10:29:00.000-07:002010-10-06T11:11:40.525-07:00Collective Computation<span style="font-style: italic;">Like the biological brains that inspired them, neural networks process information in a manner that is both massively parallel AND massively interconnected. 
What does this mean, and how does this compare to digital computation?</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; width: 350px; height: 266px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnqegUyFVeIhiMQPsTF6-FFuOsYqQkPhgxXHM6GXkUMz2GuweA1hRqbdFyLYmeg4x_eAoT2hjtQENWprVNXxHY4gMfWjsqgCjcxjkSQLwEnx3fdKItvE4L4Qy_ANg0lDq_rwiOZqh-UnM/s400/Collective+Computation.jpg" alt="Collective Computation in Neural Networks" id="BLOGGER_PHOTO_ID_5524526775609571042" border="0" /><br />The computational behavior of neural networks is a collective property that results from having many computing elements act on one another in a richly interconnected system. The collective properties have been studied using simplified model neurons for over a quarter century.<br /><br />To understand how collective circuits work, we must take a wide view of computation. Any computing entity, whether it is a digital or analog device or a collection of nerve cells, begins with an initial state and moves through a series of changes to arrive at a state that corresponds to an “answer”. The process can be visualized as a path, from beginning state to answer, through the physical “configuration space” of the computer as it changes over time.<br /><br />For a digital computer, this configuration space is defined by the set of voltages for its devices (usually transistors). The input data and program provide starting values for these voltage settings, which change as the computation proceeds and eventually reach a final configuration, which is reported to an output device such as a screen or printer.<br /><br />For any computer, there are two questions of central importance: How does it determine the overall path through its configuration space? And how does it restore itself to the correct path when physical fluctuations and other “noise” cause the computation to drift off course? 
In a digital computer the path is broken down into logical steps that are embodied in the computer’s program. In addition, each computing unit protects against voltage fluctuations by treating a range of voltages, rather than just the exact voltage, as being equal to a nominal value; for example, signals between 0.8 volt and 1.2 volts can all be restored to 1.0 volt after each logical step in the computation.<br /><br />In collective-decision circuits the process of computation is very different. Collective computation is an analog process, not a digital process. The overall progress of the computation is determined not by step-by-step instructions but by the rich structure of connections among computing devices. Instead of advancing and then restoring the computational path at discrete intervals, the circuit channels or focuses it in one continuous process. These two styles of computation are rather like two different approaches by which a committee makes decisions.<br /><br />In other words, in a neural network the software <span style="font-style: italic;">is </span>the structure - both the pattern of interconnections and the strength of each connection are of importance to the computed result. Each of what could be many, many inputs helps determine the output of the neural node. This is very similar to how an operational amplifier works in the world of electronics. Therefore, artificial neural networks are composed of connected operational amplifiers.<br /><br />A neural network can best be thought of as a <a href="http://predictionboy.blogspot.com/2010/10/computational-energy-surface.html">computational energy surface</a>. <br /><br />Collective computation is well suited to problems that involve global interactions between different parts of the problem. 
The nature of many of the problems that neural networks excel at can be described as “optimization problems”, such as the <a href="http://www.gene-expression-programming.com/GepBook/Chapter6/Section3/SS2.htm" target="new">task assignment problem</a>.<br /><br />Perception can also be expressed as an optimization, in that our interpretation of sensory information is constrained by what we already know. Our senses constantly gather great quantities of information about the external world, which is often imprecise and noisy. The edge of an object might be hidden behind another object, for example. However, we know that the edges of objects are continuous, and the mere fact that we can’t see an edge doesn’t make us wonder whether the object has changed its shape.<br /><br />This knowledge can often be represented as a set of constraints, similar to those in a task assignment problem, and expressed in a computational energy function. The perceptual problem then becomes equivalent to finding the deepest valley in the computational energy surface. For example, problems in computer vision can be expressed as optimization problems and solved by a collective-decision circuit in which knowledge of the real world has been imposed as a set of constraints. This approach can be used to take incomplete depth information about a 3-D world and reconstitute missing information such as the location of the edges of objects.<br /><br />One of the outstanding features of neural networks is that they converge on a good solution rapidly, usually in a few multiples of the response time of the computing devices – often less than a microsecond, for a problem that would take a digital computer millions of cycles even with the most efficient algorithm.
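To make the ideas of collective computation and the computational energy surface concrete, here is a minimal sketch (my own illustration, not code from the original article) of a Hopfield-style collective-decision network in Python. The patterns, sizes, and function names are invented for the example: the connection strengths are the "program," and repeated local updates slide the state downhill to the nearest valley of the energy surface, restoring a corrupted pattern much the way perception fills in a hidden edge.

```python
import random

def train(patterns):
    # Hebbian outer-product rule: the connection strengths ARE the "program"
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def energy(w, s):
    # The computational energy of a state; each update in settle() can only lower it
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def settle(w, s, sweeps=5):
    # Asynchronous updates: each node adopts the sign of its weighted input sum,
    # so the state slides downhill into the nearest valley of the energy surface
    n = len(s)
    s = list(s)
    for _ in range(sweeps):
        for i in random.sample(range(n), n):
            s[i] = 1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
    return s

# Two stored "memories" as +/-1 patterns (chosen here to be orthogonal)
A = [1, 1, 1, 1, -1, -1, -1, -1]
B = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([A, B])

noisy = list(A)
noisy[0] = -noisy[0]   # corrupt one bit, like an occluded edge in a scene
recalled = settle(w, noisy)
print(recalled == A)   # True: the net restores the complete stored pattern
print(energy(w, noisy) > energy(w, recalled))  # True: settling lowered the energy
```

The same machinery, with the constraints of a task assignment or vision problem encoded in the weights instead of stored memories, is what turns "finding the deepest valley" into solving an optimization problem.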
<br /><br />Here we can begin to discern the phenomenal power of neural networks, which again is a result of both their massive parallelism and massive interconnectedness.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-13107938507515346922010-10-01T10:27:00.000-07:002011-01-16T08:32:15.965-08:00Neural Networks<img style="display: block; margin: 0px auto 10px; text-align: center; width: 400px; height: 251px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivXPtp-rwsY0KM6BZtSGdDnMif51bVGPuNmfoV4GnBjVapLJvV5tPJ_kYFMkXj2ZVkuNgrKw5KLx5A2VqV-dDMJ24fxzxgocnjIdADDs62KAiwp95u4SIm6QNq1OlvBOsPZZ_Xmng69po/s400/neural+networks.jpg" alt="Neural Networks" id="BLOGGER_PHOTO_ID_5526850708057029346" border="0" /><br /><br />In terms of achieving advanced AI, neural networks have a critical role to play, both on their own and in combination with digital computers in a seamless computing system.<br /><br />Despite the fact that our brains are neural networks, that we have had a theoretical framework for understanding neural computation since the 1940s, and that we have been building simple neural networks since at least the late 1980s, the collective computation and associative memory that power neural networks are still not well understood by many.<br /><br /><a href="http://predictionboy.blogspot.com/2010/10/introduction-to-neural-networks.html">Introduction to Neural Networks</a><br /><span style="font-style: italic;">Definition of collective computation, associative memory, and identification of the correct electronic analogue for a biological neuron</span><br /><br /><a href="http://predictionboy.blogspot.com/2010/10/your-brain-as-neural-network.html">Your Brain as Neural Network</a><br /><span style="font-style: italic;">The human brain is a magnificent - and magnificently complex - engine of collective computation.</span><br /><br /><a 
href="http://predictionboy.blogspot.com/2011/01/neuron-replacement.html">Critique of the Idea of Gradual Replacement of Biological Neurons with Artificial Neurons</a><br /><span style="font-style: italic;">How realistic is the idea that one or millions of artificial neurons could be implanted into someone's brain, hook themselves up, and communicate successfully with biological neurons? Would this actually lead to a "more intelligent" brain anyway?</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-42295676832707130142009-04-19T08:21:00.000-07:002009-04-20T04:40:20.978-07:00Paths to Advanced AI - Engineered or Brute-Force Brain Simulation?<img style="display:block; margin:0px auto 10px; text-align:center;width: 200px; height: 184px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3zv_B3PekiF8ZxJClMW4lm6XFKjvxQjxfp2JPaEN4WDfJ16b6UPhmhiTbFdNgAvQbcO-NWwJHAJv2x8eFO-VQLiReKI5wHjAknCNbAlW1caCpordV3yZIQ7XC4H07IybRer3VTYfHmdc/s400/brain1.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5326736914184674530" /><br />This is an interesting site, a few years old, but <a href="http://www.mattbamberger.com/content/WhatIBelieve" target="new">Matt Bamberger</a> makes some good comments on the topic of advanced AI, and the various paths thereto.<br /><br />These are excerpts from this page:<br /><span style="font-style:italic;"><span style="font-weight:bold;">"We will soon develop human-equivalent AI, either by brute-force simulation of the human brain, or by a more traditional engineered approach.<br /><br />By human-equivalent AI, I mean an AI with cognitive abilities at least equal to those of a human being. It isn't necessary for an AI to be exactly (or even moderately) like a human being. 
I agree with <a href="http://yudkowsky.net/" target="new">Eliezer Yudkowsky</a> and others who have argued that a human-like AI would be profoundly dangerous.<br /><br />Human-like AIs are dangerous for two reasons. Firstly, they will tend to exhibit dangerous human traits such as selfishness and fear as well as benign ones. Secondly, the inner workings of a human-like AI will probably be relatively opaque, just as the workings of an actual human brain are opaque. This makes it much harder to monitor and evaluate an AI, both to prevent it from exhibiting malicious behavior and to detect any serious malfunctions such as the development of aberrant goals."</span><br /></span><br /><br />I like his wording here, "AI with cognitive abilities at least equal to those of a human being." That is what is critical to advanced AI, both to be useful, and to be safe. In other words, it doesn't have to be exactly like a human brain. <br /><br />These are some excerpts from his page on <a href="http://www.mattbamberger.com/content/BrainSimulation" target="new">Brain Simulation</a>: <br /><span style="font-style:italic;"><span style="font-weight:bold;">"There are two basic approaches to building an AI. The traditional approach is to write an artificial intelligence from scratch. A less sophisticated but perhaps more tractable approach is to simply simulate the workings of an actual human brain. For a variety of reasons that I'll detail below, I don't think this is the best approach, but I think it's an excellent fallback. If traditional approaches fail to deliver a working AI in a timely fashion (which they very well may), there's an excellent chance that brain simulation will be able to deliver a good enough AI within the next few decades."</span></span><br /><br />I do not concur with Matt that brute-force human brain simulation is "an excellent fallback" if we can't figure out how to do engineered AI. 
However, his outlining of the complexities and risks of the brute-force approach is quite lucid.<br /><br />I have said on numerous occasions that Kurzweil's estimate of 2045 for the Singularity is probably not a bad one, certainly much more reasonable than much nearer-term estimates like the next few years. I tend to think of "The Singularity" simply as the point when we have an AI with cognitive capability roughly equivalent to that of a human brain. I have also said that this estimate reflects his appreciation of the difficulties of engineering the AI software, since really, the raw computing power necessary is probably already here, or very close on the horizon (and architectural innovations like those in the "Googling the Brain" video will make the hardware side of the equation even more tractable, perhaps).<br /><br />However, I have perhaps misinterpreted Kurzweil to some degree. What he seems to be saying is that engineered AI might be too tough, and so he favors the "brute-force AI," based on an exact copy of a human brain, described above. I am unsure to what degree he actually believes this is the only feasible approach, or if perhaps he is bringing some personal preferences to this, such as having our advanced AI be "spiritual", thinking of advanced AI as a "successor to humanity", things like that.<br /><br />My perspective is that engineered AI, no matter how difficult it may be or how long it may take, is the only viable approach. I think of advanced AI as a partner to the intellectual pursuits of humanity, not a replacement, or successor, to humanity.
This notion is supported by the entire history of technology - and though advanced AI will be special, it will still be a technology product - at least if it is synthetic rather than biological, which I suggest it will be, for many reasons.<br /><br />However, this comment is interesting:<br /><span style="font-style:italic;"><span style="font-weight:bold;">"It's hard to extend the core capabilities of a simulated brain. One of the advantages of the simulation approach is that it requires very little deep understanding of how human intelligence actually works. The downside of that is that it makes it very hard to improve the AI. Even fairly simple tasks like increasing the AI's memory or directly transferring knowledge or skills from one AI to another become very hard problems. Virtual brains will be super-fast, super-capable people, but they're unlikely to be genuinely superhuman."</span></span><br /><br />This seems like an eminently reasonable comment. A brute-force AI may be far less capable of undertaking a runaway loop of ever-increasing intelligence, which of course the Singularity is predicated on. It is therefore unclear to me why Kurzweil seems to favor this approach in the first place. With engineered AI, there are definitely clear paths to increasing its intelligence. My favored concepts are hyper-observancy, super-subtlety, and ultra-coordination, which could consume almost unlimited multiples of human intelligence in ways that provide tremendous value, while remaining entirely rational, predictable, and safe. However, there are undoubtedly many other ways to identify measures of "intelligence" that could be improved in clearly value-adding ways.
The main point here is that the AI will almost certainly need to be engineered rather than brute-forced in order to identify, manage, and increase those measures.<br /><br />This is the biggest challenge I have with Kurzweil's vision: he simply uses Vinge's definition of advanced AI intelligence as "being able to do anything a human can do," which seems to reinforce his notion that brute-force AI is best - though I'm not entirely sure. However, there are many things a human can do, and more to the point, many parts of the human brain and its psychological components that there is no need for an advanced AI to have, and that would make it exceedingly dangerous if it did. This contention of having to "bake in" the dangerous parts of the human brain as the only path to advanced AI is, I suggest, the main and probably only reason that the Singularity is "unknowable." An engineered AI that is purely rationally controlled - as it almost certainly would be - renders the post-Singularity "knowable."<br /><br />Another comment from Matt's site: <br /><span style="font-style:italic;"><span style="font-weight:bold;">"Simulated people are by definition just like people. That means they're cranky and sneaky and prone to behaving badly. Those are bad properties in an AI."</span></span><br /><br />In other words, using Freudian terminology, they would have an id, superego, and ego. If you don't like Freud, use animal passion, morality, and reason, or whatever parlance you like. These three components, the tripartite nature of the human mind, are acknowledged in some form by virtually all of the great thinkers of history, although parlance and emphasis vary widely. For example, in contrast to Freud, Plato argued that reason could, in principle, rule the passions. However, he acknowledged that this usually didn't occur in real life - hence the idealism of his view. The most dangerous of these, of course, is the id.
An advanced AI with an id would be unpredictable, because our animal passions are often irrational. This would, I suggest, make it a poor product to introduce into the marketplace, and if it were introduced, would probably result in the manufacturer getting their pants sued off the first time one of these devices knocked someone through the wall because they somehow "offended" it, or whatever.<br /><br />A brute-force AI, even if we or they could figure out how to "increase" their intelligence, could easily become more dangerous or unpredictable the more intelligent they became, a very unfavorable trend for a manufactured product.<br /><br />A brute-force AI would also "bake in" parts of the human brain that are really only suitable for a biological organism. For instance, the sex drive. To bake that into a manufactured product seems not only dangerous, but actually cruel to the advanced AI itself. Simulated sex drive, cool - but you need engineered AI for that.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com2tag:blogger.com,1999:blog-3121909089397896391.post-81312413220769853102009-04-16T05:58:00.000-07:002009-04-16T06:33:02.067-07:00Intelligent Design<img style="display:block; margin:0px auto 10px; text-align:center;width: 342px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMxd4iVpqk-P93B2ycbXBScH6QOQf5YthWPxWS8Ara0OPp43siX3NR8VBo6IauW1MmNsZSQUuLbZRa4o2E-DB9_Sm6c8G2WeGSoafkziUbJri8OWvARz4RZj0TF7GPJgS88FSU7J607x0/s400/intelligentdesign.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5325281403099279842" /><br />What is <a href="http://en.wikipedia.org/wiki/Intelligent_design" target="new">Intelligent Design</a>?<br /><br /><span style="font-style:italic;">Intelligent design is the assertion that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection."
It is a modern form of the traditional teleological argument for the existence of God that avoids specifying the nature or identity of the designer. The idea was developed by a group of American creationists who reformulated their argument in the creation-evolution controversy to circumvent court rulings that prohibit the teaching of creationism as science.<br /><br />Intelligent design's leading proponents, all of whom are associated with the Discovery Institute, a politically conservative think tank, believe the designer to be the God of Christianity.<br />Advocates of intelligent design argue that it is a scientific theory, and seek to fundamentally redefine science to accept supernatural explanations. The consensus in the scientific community is that intelligent design is not science. The U.S. National Academy of Sciences has stated that "creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science." The U.S. National Science Teachers Association and the American Association for the Advancement of Science have termed it pseudoscience. Others in the scientific community have concurred, and some have called it junk science.</span><br /><br />Therefore, Intelligent Design is not some augmentation of evolution - it is a competing "theory" offered in place of evolution.<br /><br />I concur that natural evolution is compatible with the existence of a higher intelligence.
However, there is nothing to suggest that such a higher intelligence is at all required for natural evolution to operate - and therefore, suggesting a higher intelligence violates Occam's Razor in the worst possible way, because it distracts from the valid unraveling of the truly profound intricacies of evolution by introducing what at heart is a metaphysical desire for God.<br /><br />When we look to the heavens for the answer to what is right before us, all around us, it is not the mystery of evolution that we are trying to solve - we are attempting to feed our deep human need for God. A person's faith is their own; I do not attack that. It can be a source of great inner strength. But do not look to the physical world to bolster those beliefs - if one must do that, it could be suggested that one's faith is fragile. If it provides comfort, consider God the "ultimate scientist" - a rational entity who initiated a universe that follows physical laws; those laws evince themselves in myriad ways, and some of those ways can be described as various natural processes.<br /><br />Evolution is a process. That's all it is - like the weather, plate tectonics, star formation, galaxy formation, and many other natural processes we could name. A most amazing process, but just a process.<br /><br />As for suggesting that aliens might have done it, rather than a Christian or other God - there is no real difference between the two. If you ponder it, a sufficiently advanced alien race would be indistinguishable from God to us, if that is what they wanted to be. If we make the equally Occam's Razor-violating assumption that "aliens did it", we once again raise our eyes to the stars for the answers to what is all around us.<br /><br />Natural selection is a most powerful idea - not just because it has been, and continues to be, validated by an ocean of empirical evidence, but because it is simple to grasp.
Those who attempt to obfuscate the purity of this idea by introducing metaphysical perspectives do the pursuit of objective truth a grave injustice. Their methods in fact predate the scientific revolution - they use rhetoric and disputation until one becomes convinced of their view. This is not science - this is what the scientific method was specifically designed to correct. Rhetoric and disputation do not lead to the truth if that rhetorically-derived "truth" runs counter to the book of nature - in other words, if it runs counter to the empirical evidence.<br /><br />Interestingly, the breakthroughs in genetics and the (much touted here) advances in DNA sequencing and such are helping to unravel how evolution works in greater detail than ever before. These technologies are adding staggeringly vast new confirmation to the century and a half of empirical validation this theory has already accumulated.<br /><br />Sometimes it is suggested that when empirical evidence runs counter to one's deep certainty on some issue, the empirical evidence must lose. This is the hardest thing about embracing science - to be objective, to let the evidence form one's assessments, not bending the facts to suit one's predetermined beliefs.
This is why science, despite hundreds of years of performing incredible feats of knowledge generation and technological progress, still has but a precarious foothold in the minds of most.<br /><br />Here is a series of articles discussing many of the various aspects of current evolutionary thinking.<br />http://www.sciam.com/sciammag/?contents=2009-01<br /><br />Of particular interest might be these:<br /><br />Testing Natural Selection with Genetics<br />Biologists working with the most sophisticated genetic tools are demonstrating that natural selection plays a greater role in the evolution of genes than even most evolutionists had thought<br />http://www.sciam.com/article.cfm?id=testing-natural-selection<br />Key Concepts:<br /><br /><br />* Charles Darwin’s theory that evolution is driven by natural selection—by inherited changes that enhance survival—struggled against competing theories for the acceptance it has within biology today.<br />* Random genetic mutations having neither positive nor negative effects were once thought to drive most changes at the molecular level. But recent experiments show that natural selection of beneficial genetic mutations is quite common.<br />* Studies in plant genetics show that changes in a single gene sometimes have a large effect on adaptive differences between species.<br /><br />This is a brief excerpt from the above, focusing on what is meant by natural selection and "fitness":<br /><br />The idea of natural selection is simplicity itself. Some kinds of organisms survive better in certain conditions than others do; such organisms leave more progeny and so become more common with time. The environment thus “selects” those organisms best adapted to present conditions. If environmental conditions change, organisms that happen to possess the most adaptive characteristics for those new conditions will come to predominate.
Darwinism was revolutionary not because it made arcane claims about biology but because it suggested that nature’s underlying logic might be surprisingly simple.<br /><br />In spite of this simplicity, the theory of natural selection has suffered a long and tortuous history. Darwin’s claim that species evolve was rapidly accepted by biologists, but his separate claim that natural selection drives most of the change was not. Indeed, natural selection was not accepted as a key evolutionary force until well into the 20th century.<br /><br />The status of natural selection is now secure, reflecting decades of detailed empirical work. But the study of natural selection is by no means complete. Rather—partly because new experimental techniques have been developed and partly because the genetic mechanisms underlying natural selection are now the subject of meticulous empirical analysis—the study of natural selection is a more active area of biology than it was even two decades ago. Much of the recent experimental work on natural selection has focused on three goals: determining how common it is, identifying the precise genetic changes that give rise to the adaptations produced by natural selection, and assessing just how big a role natural selection plays in a key problem of evolutionary biology—the origin of new species.<br />...<br />“Fitness,” as used in evolutionary biology, is a technical term for this idea: it is the probability of surviving or reproducing in a given environment.<br />...<br />Adaptive evolution is therefore a two-step process, with a strict division of labor between mutation and selection. In each generation, mutation brings new genetic variants into populations. Natural selection then screens them: the rigors of the environment reduce the frequency of “bad” (relatively unfit) variants and increase the frequency of “good” (relatively fit) ones. 
(It is worth noting that a population can store many genetic variants at once, and those variants can help it to meet changing conditions as they arise.)<br /><br />Diversity Revealed: From Atoms to Traits<br />Charles Darwin saw that random variations in organisms provide fodder for evolution. Modern scientists are revealing how that diversity arises from changes to DNA and can add up to complex creatures or even cultures<br />http://www.sciam.com/article.cfm?id=from-atoms-to-traits<br />Key Concepts:<br /><br /><br />* The idea that nature “selects” favorable variations in organisms was at the heart of Charles Darwin’s theory of evolution, but how those variations arise was a mystery in Darwin’s time.<br />* Random changes in DNA can give rise to changes in an organism’s traits, providing a constant source of variation.<br />* Certain kinds of DNA changes can produce major differences in form and function, providing raw material for the evolution of new species and even new human cultures.<br /><br />Putting Evolution to Use in the Everyday World<br />Understanding of evolution is fostering powerful technologies for health care, law enforcement, ecology, and all manner of optimization and design problems<br />http://www.sciam.com/article.cfm?id=evolution-in-the-everday-world<br />Key Concepts:<br /><br /><br />* The theory of evolution provides humankind with more than just a scientific narrative of life’s origins and progression. It also yields invaluable technologies.<br />* For instance, the concept of molecular clocks—based on the accumulation of mutations in DNA over the eons—underlies applications such as the DNA analyses used in criminal investigations.<br />* DNA analysis of how pathogens evolve produces useful information for combating the outbreak and spread of disease.
Accelerated evolution in laboratories has improved vaccines and other therapeutic proteins.<br />* Computer scientists have adapted evolution’s mechanisms of mutation and selection to solve problems.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-60955041811401415142009-04-16T05:26:00.000-07:002009-04-16T06:30:09.097-07:00How Evolution Can Lead to Consciousness<img style="display:block; margin:0px auto 10px; text-align:center;width: 304px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtphJWxhlztfb6Sj6S_FBIs26lbDq5HO2Bk32JtHZ_O5BzORfo54HrzPTVcpBkiVEx9ZJ8i89t8zR-lmyWr0QQUGqjcBjKU7sQsVhrKzxrqraobDByQBWx-bvcCzVmTbAH4cz629OZzTs/s400/mandala-of-evolution.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5325280774179524562" /><br />How could an inanimate, "unthinking" process such as evolution lead to consciousness? A most worthy question.<br /><br />Really, thought is a tool that can enhance the chances of survival - like sight, heart, and lungs. Thought is really just awareness of one's environment, being able to sense and respond in a way that enables you to find food, or to evade becoming food.<br /><br />In the competitive maelstrom of evolution, awareness is quite a useful tool - the basic plan of our brain evolved quite early, with Cephalaspis.<br />http://en.wikipedia.org/wiki/Cephalaspis<br /><br />For Cephalaspis, it was used as a way to avoid getting eaten.<br /><br />Our brains are an extension of that basic plan. But why are we in particular so smart?<br /><br />Because that is our main tool for survival. Other animals have sharp teeth, fast legs, flight, etc - and those work just fine for them.
They are intelligent as well, just not like us.<br /><br />Being so convinced of our utter uniqueness masks the fact that we in fact evolved from lower life-forms that, though not as smart as us, have the same basic brain-plan as us.<br /><br />This is an extremely interesting article that explores this:<br />Animal Intelligence and the Evolution of the Human Mind<br />Subtle refinements in brain architecture, rather than large-scale alterations, make us smarter than other animals<br />http://www.sciam.com/article.cfm?id=intelligence-evolved<br />Key Concepts:<br />* The human brain lacks conspicuous characteristics—such as relative or absolute size—that might account for humans’ superior intellect.<br />* Researchers have found some clues to humanity’s aptitude on a smaller scale, such as more neurons in our brain’s outermost layer.<br />* Human intelligence may be best likened to an upgrade of the cognitive capacities of nonhuman primates rather than an exceptionally advanced form of cognition.<br /><br />In "Walking with Cavemen", part of that series explores the contrast between Homo habilis and Paranthropus boisei. The latter had a larger jaw, and could munch on the tough grass abundant in Africa at the time. Even during the dry season, boisei had abundant food. Habilis, on the other hand, had a much tougher time of it; it didn't have the jaw structure to chow down on that, so it pursued a diversity of food sources.<br /><br />http://predictionboy.blogspot.com/2008/06/walking-with-cavemen-playlist-autoplay.html<br /><br />When the environment changed, Boisei couldn't adapt, and became extinct. Boisei was overspecialized, in other words. Habilis, on the other hand, was more adaptable, not least because of its larger brain, and survived.<br /><br />That series also explores how bipedalism led to changes in our chest and such that opened up the possibility of speech, which Homo ergaster seems to have been the first to possess.
So bipedalism was important, and the main evolutionary selector for this seems to be that as the savannahs expanded in Africa, we could leave our tree-swinging ways. And bipedalism has a tiny energy advantage when walking around in a treeless environment, one that over a lifetime might lead to, say, having one extra baby.<br /><br />The Homo ergaster video talks about how we "stared our way" into intelligence. As intelligence became our primary survival strategy, it increased over time, favored by natural selection. This is perhaps not too different from the evolution of saber teeth in some big cats. As our intelligence increased, other things, such as our teeth, claws, etc, decreased, which in turn made intelligence more important as a survival strategy.<br /><br />After a certain point, especially after we developed the whites of our eyes and hence became more "expressive", much of our increasing intelligence seems to have been a result of needing to understand each other. One can see how this might lead to sort of an "arms race" of expanding brain capacity, as we needed to understand other humans, who were also increasing in intelligence.<br /><br />The series explores the many accidents of history that made possible the circumstances that eventually led to the rise of human intelligence. For example, India's collision with Asia 8 million years ago dramatically changed weather patterns. This made Africa drier; up to then, it was pretty much coast-to-coast forest; after that, the savannah opened up, making moving out of the trees and bipedalism a successful path for our ancestors to take.<br /><br />Really, the number of unusual circumstances that led to the rise of our intelligence is pretty substantial. To me at least, there was nothing inevitable about it.
And the fact that our brand of intelligence has emerged only once in 4.5 billion years is one of the main reasons I suggest other advanced alien civilizations might be quite rare.<br /><br />However, the main point here is that when you look at awareness and consciousness as useful tools for surviving and thriving, and hence in the right circumstances favored by evolution, it is not so mysterious how they could have arisen.<br /><br />But I agree with the thrust of this thread: our brains are damned amazing. But God or aliens are not required to explain them.<br /><br />This is an interesting article on some of the many artifacts of our fish and amphibian origins still with us today:<br />http://www.sciam.com/article.cfm?id=this-old-body<br />Key Concepts:<br />* Routing of nerves and fluid pathways in the human body resembles the tangle of wiring and pipes in an aging house, a heritage from fish and amphibian ancestors.<br />* The tube through which sperm passes forms a roundabout loop that can lead to hernias, a result of major anatomical changes that occurred as we evolved from fish.<br />* Nerves that are inherited from fish and travel from the brain to the diaphragm can become irritated and trigger hiccups, a closing of the entryway to the windpipe, an action that itself is a hand-me-down from amphibians that breathe with both lungs and gills.<br /><br />This article has really got me thinking:<br />How to Save New Brain Cells<br />http://www.sciam.com/article.cfm?id=saving-new-brain-cells<br />Fresh neurons arise in the adult brain every day. New research suggests that the cells ultimately help with learning complex tasks—and the more they are challenged, the more they flourish<br /><br />The studies were done in mice, and it was shown that their brains produce between 5,000 and 10,000 new neurons a day, even in adult brains.
It is extremely likely that something similar occurs in most animal brains, at least in most mammals.<br /><br />We don't know what the numbers are for humans. However, as a wild guess, it would not seem unreasonable to suggest that humans might generate ten times the number seen in mice - perhaps 50,000 to 100,000 fresh neurons a day.<br /><br />This is strong evidence of the plasticity of even the adult brain of most animals. In the case of humans, with our nominally greater intelligence, we can begin to understand how such plasticity can accommodate things like the structures of civilization, religion, and law, as well as technology.<br /><br />Interesting thing about religious belief. The Walking with Cavemen video entitled "Walking with Cavemen - Part 4 Survivors 1 of 3", available from the Caveman link above, discusses Homo Heidelbergensis, who lived in Europe about 500,000 years ago. They were quite similar to us in many ways, but they did not have the conception of an afterlife and hence, presumably, no religious beliefs in general. 
So our capacity for religious belief, or the need for it, came some time afterward.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-32091149569694555692009-04-01T08:56:00.001-07:002010-11-30T10:48:33.296-08:00The Most Amazing Droid Superpowers That You've Never Heard Of<h3><span style="font-style:italic;">The droidian "super-powers" of hyper-observancy, super-subtlety, and ultra-coordination will provide amazing utility to humans, and consume potentially vast multiples of human intelligence (HI).</span></h3><br /><img style="display:block; margin:0px auto 10px; text-align:center;width: 298px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHVebvvkr4O4-NVFtQf-anvYjvbnGLCo6cabcJhySR3ili5YEd3O4qYVkOuKFOykVaxOD2x90iW-kpgt4Wlncu4tA2zg4UEuxhUk6fE5ZOx8bFVzQqtPmNxRTImRaLFeDABw0_f03GKoY/s400/lucy-pinder-superman-shirt.jpg" border="0" alt="Droid super powers" id="BLOGGER_PHOTO_ID_5319756081719873474" /><br />As a critical set of tools to fulfill the consumer droid's <a href="http://predictionboy.blogspot.com/2007/09/heres-post-singularity-droid-already.html">design objectives</a> successfully, over time, immense multiples of HI will congregate around three amazing skills. They are so special that they can justifiably be called superpowers.<br /><br />Now, droids will have some superpowers you have heard of – very strong, very fast, etc. But these traditional powers will rarely be needed. So why have them? 
Well, when you need them, you need them – and it might prove useful to, say, have your droid be able to lift a car off of you, or catch a bullet for you with cobra-like speed.<br /><br />However, these traditional superpowers will pale compared to the most amazing superpowers you’ve never heard of – super-subtlety, hyper-observancy, and ultra-coordination.<br />To the extent it’s even clear what these are, they may seem kind of lame – but they are not, my friends – they are the keys to the kingdom. With these superpowers, a droid can keep its owner out of trouble without anyone even being aware that it’s doing much.<br /><br />Let me introduce these very briefly.<br /><strong>Super-Subtlety</strong> is the most unusual, and maybe the most powerful. It’s hard to explain, and will manifest itself in different ways, depending on the situation. Basically, the droid can defuse or otherwise influence a situation with a touch so deft that the humans around it are unaware they are being influenced. This is important, because a droid can't boss its owner around or tell him what to do. And people then, like now, will be of variable character, and some may have a knack for getting themselves into trouble. It is the job of that person’s droid to prevent that trouble, but in such a way that the owner thinks it’s his idea.<br /><br />Another example that comes to mind is from the original Star Wars, when they’re in the alien cantina and the ugly dude is threatening Luke. Kenobi steps in gently and says, “this little one’s not worth the effort. Come, let me get you something.” Now, it didn’t work for that alien, who gets a limb lopped off as punishment for his carelessness. But I suggest that’s a great line, and it would work in many situations here on earth. 
The droid’s first choice will always be non-violence in resolving any situation, because violence is harder to control once set in motion, and legal and other troubles could result.<br /><br /><strong>Hyper-Observancy</strong> is useful for feeding super-subtlety, among other uses. Basically, it allows the droid to be a sort of “information telescope”, able not only to take in vast amounts of information, but also to distill that information quickly into its most actionable form for its owner's benefit.<br />One example might be walking into a crowded bar, the kind with music blaring so loud you can barely hear yourself think. But the droid can parse and distill every conversation in the room, even with the noise in the way. Not for surveillance (there are ethical constraints for droids that we'll get into later), but, as far as those constraints allow, for its owner's benefit. Maybe finding the table with the most interesting conversation, to set you up.<br /><br /><strong>Ultra-coordination</strong> is, basically, the ability of the droid's brain to control its movements with extreme precision. Let me give an example.<br />Say your droid is driving your car, and you're in the passenger seat. It's raining pretty hard. All of a sudden, an 18-wheeler comes crashing across the highway median, heading straight for your car. To avoid getting creamed, there are two options: swerve to the right, but unavoidably hit a car in the way; or swerve to the left, where there is just enough clearance for the car - 1/4 inch, let's say - something you as a human would never attempt. The droid takes the left, comes out the other side while you probably have a heart attack, and proceeds on its way.<br />Let's go further, in case that's not impressive enough. Say you're driving, and the droid is in the backseat for some reason, maybe playing with your child, whatever. The same thing happens: the 18-wheeler comes barreling over straight at you. Now you, you are not so composed when faced with certain death. 
You freeze, holding the steering wheel as tightly as you can. The droid reaches across the back seat and grabs the steering wheel with one hand while you involuntarily resist, still holding the wheel tightly - and the droid does the same thing as in the scenario above.<br />Now that's ultra-coordination; that's a feature set that will save lives. You will not want to leave home without this device. You will feel naked without it, invincible with it.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-92230921313554911482009-04-01T04:56:00.001-07:002009-04-01T16:33:42.104-07:00Why Droids Will Converge on the Human Form Factor<img style="display:block; margin:0px auto 10px; text-align:center;width: 355px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhH2HsRLIHYun9F5CW3lvClCvRtqpHqQnua-Yd9eOoe9SYSrvCqb1iKed3EJskwC49fNAytkdUcDGEtWnl8KgxEOAq2uMqz3zWm41TCOvxjxI_zvavbFvH5gKe6HgXj03fCAfMvYi4a8iM/s400/AI+-+23e21a226a7447db8bd970530d534354.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5319692216210655794" /><br />What form will these droids take? At first, probably for the next couple of decades, they will keep the forms they take now: specialized droids like the Roomba vacuum and such.<br />Next will be a generation of metal-men, like C3-PO, probably for another couple of decades.<br />Next is when things get really interesting: they will start to take realistically human forms.<br /><br />Why human?<br />Several reasons we'll touch on briefly here.<br />1. Maximum compatibility with the existing infrastructure built for humans.<br />2. The human form will minimize the communication barrier between humans and computing devices, as low as it can go without being integrated into our bodies. The form of an external droid is actually better in most cases, because it can do chores and such, doing more things than a piece of biotech integrated into our bodies. 
These droids will be realized before such integration of biotech is widespread.<br />3. Social situations. Yes, there are compelling reasons to want our droid to go with us into many social and public situations. We'll discuss this in detail in a later thread. Suffice it to say for now that when these droids do accompany their owners, it is in the owner's best interest for the droid to be convincingly human to everyone else.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com2tag:blogger.com,1999:blog-3121909089397896391.post-82014829644606600642009-03-31T07:11:00.000-07:002010-10-25T10:27:29.013-07:00Invasive vs. Non-invasive Augmentation<span style="font-style: italic;">The focus of biological technologies will remain on the understanding of ourselves and other lifeforms, treating diseases, and extending high-quality lifespans. On the other hand, actual augmentation of our intellectual and physical powers will, for the most part, happen by proxy via non-invasive devices.</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; width: 275px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTIJ0Z25O687iB6htMiqrvHmt9CpC9RbedVjeelQ7iCOIDfC2U_qID0T-KNvwgW1hpwjLZXM1qLl9TQEESqtEzIsHUBvJVBZKwNAqNubOE1cm_BbMJDWXAB0MxXRVYmyAnkwMmLM5xhIY/s400/b168763955.jpg" alt="" id="BLOGGER_PHOTO_ID_5319693834195221922" border="0" /><br />In addition to carefully applying existing and well-established trends for understanding the future, there is another characteristic common to all of the future technologies in this blog. They are all <span style="font-weight: bold;"><span style="font-style: italic;">noninvasive</span></span>; that is, they do not rely upon directly "plugging in" to the human brain. There are several reasons for this:<br /><br /><span style="font-weight: bold;">1.</span> It is unnecessary. 
The value-add of these technologies can be achieved without direct connections between the technologies in question and the human brain.<br /><br /><span style="font-weight: bold;">2.</span> Although many aggressive technology enthusiasts (such as those who frequent futurist forums) seem to be enamored with this idea of direct tech-to-brain interfaces, I do not believe that the vast majority of the consumer base will be anywhere near so enthused. As Woody Allen said in Sleeper, "No one touches my brain - that's my second favorite organ."<br /><br /><span style="font-weight: bold;">3.</span> Direct brain-to-technology interfaces will be subject to far greater legal penalties if something goes wrong. And it would not take many such suits to move the market away from such devices.<br /><br /><span style="font-weight: bold;">4.</span> The technologies discussed in this blog are intended to be utilized for an arbitrary period of time, from a few minutes to several hours or more. Even something as simple as 3D glasses starts to annoy after a short while. So although the technologies in this blog are completely compatible with such things as 3D glasses, virtual-reality helmets, etc., and the software would be virtually identical, they do not require such devices to fulfill their design purposes of utility and/or imagination actualization.<br /><br /><span style="font-weight: bold;">5.</span> The technologies in this blog are meant to be utilized or enjoyed alongside "reality", not to exclude the real world altogether. For example, a virtual-reality helmet in essence removes you from reality; it's difficult, if not impossible, to know what's going on in the real world while wearing such an enveloping device. 
The technologies in this blog provide an extremely realistic virtual experience, without shutting out the real world entirely - you can still hear your baby cry, for instance.<br /><br /><span style="font-weight: bold;">6.</span> For a given amount of computational "augmentation", a separate, stand-alone device will generally be more economical than one directly integrated into the human brain. The demonstration of this is somewhat problematic, because Kurzweil keeps what he means by direct augmentation of the human brain quite vague, using terms such as "smarter" that could mean a great variety of things. However, let's pick just one specific example - calculation of the square root of 2 to 1,000 decimal places. A stand-alone device with little or no AI could do this for an effective cost of a few pennies or less, and is available with even today's computer technology. However, to augment a human brain so that this calculation can be done "in one's head", as it were, is a vastly more arduous and expensive undertaking. It's unclear what the best way to achieve this would be, in any case - via a silicon chip implant? This smacks of "unnecessary surgery", and even if established to be feasible, seems like a highly optional procedure. This leads to the next concern:<br /><br /><span style="font-weight: bold;">7.</span> Brain augmentation - and for that matter, transhuman augmentation of any kind - even if proven safe and feasible, seems to fall into the class of "optional therapy" - i.e., not life-saving in the sense of treating a chronic ailment that afflicts the disease-stricken. Depending on the nature of the enhancement, it also seems like these kinds of augmentations could be quite expensive. Extrapolating the current trend of the stinginess of health insurers, it seems reasonable, in fact quite likely, to suggest that they will NOT cover the expense of such optional procedures. 
This means that any transhuman enhancements will be paid out-of-pocket by those desiring them. If this quite likely scenario proves to be the case, the economics will keep the procedures expensive and out of reach for all but the well-to-do. Kurzweil seems to suggest that these types of enhancements will become widespread and happily covered by insurers and/or demanded by the marketplace, but there is absolutely no evidence for this. That doesn't mean it's impossible, but it is deeply improbable, and in any case Kurzweil gives no detailed explanation as to why this scenario would come to pass in the first place.<br /><br /><span style="font-weight: bold;">8.</span> A standalone device provides greater scope for offloading of tasks and activities than one directly integrated into the human brain. This ties into the key consumer drivers of "convenience" and "making life easier" that are a standard part of successful products, technological and otherwise. If it's directly integrated into one's brain, it may make thinking easier, but it's still the human doing most or all of the work.<br /><br /><span style="font-weight: bold;">9.</span> Inputs and outputs can generally be better standardized and communicated via stand-alone technologies than via those directly integrated into the human brain. For example, consider moving beyond the "square root of 2" scenario to something more involved - say, a detailed weather simulation. Again, this is something that current computers can do, even without appreciable AI as part of the software suite. The inputs for the simulation are entered in a standard way, and the results are output to a screen or printer in a standard way (at least for a given software application). If all this is happening inside someone's head, how do you know the inputs are exactly right, and hence trustworthy? How does that person communicate the results of the simulation anyway - do they drive a monitor or a printer with their thoughts? 
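The "square root of 2" example in point 6 is easy to make concrete. As a minimal sketch (Python is my choice of illustration here, not something the post specifies), a stand-alone device needs only the standard decimal module:

```python
from decimal import Decimal, getcontext

# Use a little more precision than needed so rounding in the
# final digits cannot disturb the first 1,000 decimal places.
getcontext().prec = 1010

# decimal's sqrt() is correctly rounded at the current precision.
root_two = Decimal(2).sqrt()

# Keep the leading "1." plus the first 1,000 decimal places.
digits = str(root_two)[:1002]
print(digits)
```

On any modern machine this completes in a fraction of a second, which is the point of the comparison: the stand-alone computation is effectively free, while building the same capability into a biological brain remains an open problem.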
These and other considerations help build a case that, for a given computing task, direct integration into the human brain in most cases vastly complicates the task and makes the results harder to communicate, less reliable, and much more expensive.<br /><br /><span style="font-weight: bold;">10.</span> All of the above considerations are given even more gravitas by the eventual development of sophisticated AI software that has, as one of its key objectives, successful communication with human beings. This is still forthcoming, to be sure, but it will arrive far ahead of direct brain integration of these technologies. As such, by the time such direct integration is feasible, it will prove unnecessary, as well as suboptimal for the above reasons.<br /><br /><span style="font-weight: bold;">11.</span> People, then as now, will value their privacy, and the privacy of one's most intimate thoughts will be guarded most jealously of all. Therefore, projections of a "global mind" composed of actual, commingling human minds are deeply misguided. A "one-off" global mind consisting of advanced, personalized AI systems that reflect the interests of their primary human interactant(s) but that can also precisely control what they share over such a global network will be far more desirable.<br /><br /><span style="font-weight: bold;">12.</span> Assuming that things such as computer viruses still exist, the idea of an actual "global mind" of commingling human brains that gets infected with a computerized pathogen is a nightmare scenario. 
Far better to have a "one-off", non-invasive system get infected, simply because a technology-only platform will be easier to cleanse of such a virus than a composite technological-organic brain.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-63293203013843049472009-03-29T14:30:00.000-07:002009-03-31T09:19:13.741-07:00Analysis of Other Widespread Ideas About the Future<img style="display:block; margin:0px auto 10px; text-align:center;width: 255px; height: 320px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSe9vzBo9Bp0TnD0fMvsDkPF26jFBwYx28dIUvyihsQdLgKMZwGPAuOTh5gQBlGIduQaH9tFuIv50DEeGL4TJ0Y762y4QlTxEnqJ0E0Xvu6EndChSlUgdEc2sfraEtkl0huoUSRLkDRyU/s320/039_20431~Woody-Allen-in-Sleeper-Posters.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5318729676124862674" /><br /><a href="http://predictionboy.blogspot.com/2008/08/why-fear-of-rogue-or-dangerous.html">Why the Fear of Rogue or Dangerous Artificial Intelligence is Fundamentally Irrational</a><br /><span style="font-style:italic;">Examining the real reasons why many fear the prospect of advanced AI - the assumption, propagated by both science fiction writers and many futurists, that high intelligence necessarily converges on more-or-less the exact form found in the human brain, a deeply misconceived notion.</span> <br /><br /><a href="http://predictionboy.blogspot.com/2007/10/exploring-theoretical-post.html">What The Singularity Might Really Be Like</a><br /><span style="font-style:italic;">Examination of how a "steeply-sloped" technological event might actually play out.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/12/prospects-for-computronium-universe.html">The Prospects for the Computronium Universe</a><br /><span style="font-style:italic;">Challenging the idea of the "Intelligent Universe," where mankind or his descendant technologies digest the 
universe.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/08/what-transhumanism-will-really-be-like.html">What Transhumanism Might Really Be Like</a><br /><span style="font-style:italic;">What form of hypothetical human enhancements will really make a difference?</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/prospects-for-robots-taking-over.html">The Prospects for Robots "Taking Over" the Workplace</a><br /><span style="font-style:italic;">Why any replacement of human laborers by advanced droids in the future will be piecemeal, demand-driven, and complementary to human intelligence and effort.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/08/prospects-for-post-captalist-economic.html">Prospects for a Post-Capitalist Economic System Replacing Capitalism</a><br /><span style="font-style:italic;">Exploring the question of whether future technologies will actually lead to a different economic system than the predominant one today, <a href="http://en.wikipedia.org/wiki/Capitalism" target="new">Capitalism</a>.</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-41201867570445973262009-03-29T09:10:00.002-07:002010-10-05T05:19:34.377-07:00The Far Future<span style="font-style:italic;">Exploring some highly non-intuitive ideas about the far future of humanity, the dark side of the future, and where are all the aliens, anyway?</span><br /><img style="display:block; margin:0px auto 10px; text-align:center;width: 400px; height: 280px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR-bI1c8cZTQtJA-TNjhH729ntyaT0ixD-4lqcB81VOjypU3LHfwek9GMb85h3SkAb1FAywU_7_UhjJuzqGr3uI08J6D3lXBTKytqNnbYdhwASQFHgTezCVYP44-MR-1RK2gunt4bYSK4/s400/syd-mead-future-doha-qatar1.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5318691981604428242" /><br /><a href="http://predictionboy.blogspot.com/2008/12/what-far-future-will-really-be-like.html">What the Far Future Will Really Be 
Like</a><br /><span style="font-style:italic;">How demographic stabilization and the maturation of sustainability technologies will combine to keep most humans on Earth.</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/invasive-vs-noninvasive-technologies.html">Invasive Biological vs. Synthetically Distinct Advanced Technology Platforms</a><br /><span style="font-style:italic;">The many reasons for suggesting that key future technologies will retain a primarily distinct nature from their human consumers.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/04/dark-side-of-future-for-individual.html">The Dark Side of the Future for the Individual</a><br /><span style="font-style:italic;">Then as now, the trials and tribulations of most people will continue to come from within us, not from external agents.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/12/so-where-are-all-aliens.html">So, Where Are All the Aliens?</a><br /><span style="font-style:italic;">An independent line of analysis that supports the Drake Equation in concluding that advanced alien life (if any) is probably quite rare; hence, any nearby civilization is almost certainly far more ancient. 
From there, we explore the question of Fermi's Paradox not from the human-centric perspective, but from the possible motivations of the aliens themselves.</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-20981537418420999212009-03-29T09:10:00.001-07:002009-03-31T09:32:44.255-07:00Advanced Virtual Reality Technology<span style="font-style:italic;">Powered by a specialized form of advanced AI, the Hyperreality Engine will go far beyond your current notions of ultra-realistic, immersive experiences.</span><br /><img style="display:block; margin:0px auto 10px; text-align:center; width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbRFxz-DFEMQwnJnXWObnXDKEGSeFJwWaESSkWD8oEKcmF3RQ8JVz5JRWE_zPGwx9CCv5ZijP8E8Ed-EYdieGPZHzBDyomhpQmWqhrxzqWckhI2KRS819MRGcXtY_r0H-qAZwMxNJN4xo/s320/virtual-reality-6.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5318689423685592722" /><br /><a href="http://predictionboy.blogspot.com/2007/11/toward-hyperreality-engine.html">Toward the Hyperreality Engine Pt 1: Introduction</a><br /><span style="font-style:italic;">The basic ideas and trends behind the Hyperreality Engine.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/02/toward-hyperreality-engine-pt-2.html">Toward the Hyperreality Engine Pt 2: The Hardware</a><br /><span style="font-style:italic;">The hardware for the hyperreality engine is neither far-fetched, nor far off.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/02/toward-hyperreality-engine-pt-3.html">Toward the Hyperreality Engine Pt 3: The Software</a><br /><span style="font-style:italic;">The software for the hyperreality engine will be yet another form of specialized AI that in many ways will actually be more complex and challenging than "greater-than-human" AI.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/02/toward-hyperreality-engine-pt-4.html">Toward 
the Hyperreality Engine Pt 4: The Applications </a><br /><span style="font-style:italic;">Some of the very interesting ways that the hyperreality engine could support the key consumer trend of <a href="http://predictionboy.blogspot.com/2007/08/imagination-actualization-in-noisy.html">"imagination actualization."</a></span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-48931638894427979442009-03-29T09:09:00.001-07:002010-11-30T10:35:22.763-08:00Advanced Robotics Technology<h3><span style="font-style:italic;">Forget C3-PO and R2-D2. Stunningly realistic droids of the future - both physically and psychologically - will be so compelling that we will consider them indispensable</span></h3><br /><img style="display:block; margin:0px auto 10px; text-align:center;width: 350px; height: 350px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDwViNWjya7r-XUT0UuQv9vmge1uw8YsfkaI0s0PvINSUo2C8JrVzeVdXgBbU48RPove_nwEhMynEphWSG-FGI0WtrOyfa7wIi0khaUAd8TLqz3C6QVTpNycHJoT2jBEQDbNKGN0Rpvmk/s400/lovely-fembot-actoid-der2.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5318693989909829826" /><br /><a href="http://predictionboy.blogspot.com/2007/09/heres-post-singularity-droid-already.html">Designing the "Smarter than Human" Droid </a><br /><span style="font-style:italic;">How might a product that is smarter than a human being in one or more ways evince that intelligence? What form factor would be optimal for most consumer applications? 
What useful "powers" might these droids possess?</span><br /><a href="http://predictionboy.blogspot.com/2009/04/human-form-for-robots.html"><br />Why Droids Will Converge on the Human Form Factor</a><br /><span style="font-style:italic;">For many segments of the droid market, the hyperrealisitically human form factor will make the most sense, for many reasons.</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/04/droid-superpowers.html">The Most Amazing Droid Super-Powers You've Never Heard Of</a><br /><span style="font-style:italic;">The droidian "super-powers" of hyper-observancy, super-subtlety, and ultra-coordination will provide amazing utility to humans, and consume potentially vast multiples of human intelligence (HI).</span><br /><br /><a href="http://predictionboy.blogspot.com/2007/09/welcome-to-far-future.html">Extremely Precise Sensory Awareness of the Artificial Mind</a><br /><span style="font-style:italic;">Exploration of key ways in which advanced droids will retain their essentially computer-centric nature.</span><br /><br /><a href="http://predictionboy.blogspot.com/2007/08/hyperintelligent-droid-as-soldier.html">The Violent Droid</a><br /><span style="font-style:italic;">Certain areas of application may require a droid to be violent - such as, soldier and bodyguard. These will be approached in decidedly "non-Terminator" ways.</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-67633578911408855252009-03-29T09:08:00.000-07:002010-10-25T10:16:36.090-07:00Advanced Artificial Intelligence Technology<span style="font-style: italic;">The good news is that advanced AI technology will be amazingly useful, not at all malevolent, and quite knowable. 
The other side of the coin is that it will be among the most difficult technical feats ever achieved by mankind.</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; width: 320px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju9P6bMXPzjaExCDwbLJcP50ZCfleqJ5bKqibK3p7HhI3y2nUa4XftrYP1O3cp9p7t9E9-_-XjEDhmGnCZ8X-IgFtNkT0DZkGz092ZywCdtrx3W-vjT6QO9-wDDkv4364PkQQgtqffXyQ/s320/brainsky.jpg" alt="" id="BLOGGER_PHOTO_ID_5318697124244416082" border="0" /><br /><a href="http://predictionboy.blogspot.com/2007/08/what-ai-will-really-be-like.html">What Artificial Intelligence (AI) will really be like</a><br /><span style="font-style: italic;">Advanced AI, no matter how intelligent, will be purely rationally controlled - and hence, knowable, non-threatening, and eminently useful.</span><br /><br /><a href="http://predictionboy.blogspot.com/2008/06/sigmund-freud-and-artificial.html">Sigmund Freud and Artificial Intelligence (AI)</a><br />(Including audio lectures explaining Freud's concepts of human intelligence)<br /><br /><a href="http://predictionboy.blogspot.com/2009/04/path-to-advanced-ai.html">Paths to Advanced AI - Engineered or Brute-Forced Brain Simulation?</a><br /><span style="font-style: italic;">Why engineered AI is the only viable approach to advanced AI</span><br /><br /><a href="http://predictionboy.blogspot.com/2007/09/wheres-all-ai-anyway.html">Where is All the AI, Anyway?</a><br /><span style="font-style: italic;">Before we get too excited about the coming of advanced AI, we must ask a serious question - where are even the simple forms in widespread use today?</span><br /><a href="http://predictionboy.blogspot.com/2009/01/empirical-ai-architecture.html"><br />An Empirically-Derived Conceptual Architecture for Artificially Intelligent Systems of Arbitrary Complexity</a><br /><span style="font-style: italic;">Exploration of the common attributes of any software deserving of the moniker "artificially 
intelligent"</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/how-advanced-ai-will-learn.html">How Advanced Artificial Intelligence (AI) Will Learn</a><br /><span style="font-style: italic;">Advanced AI will depend as much on experience as raw intelligence in order to fulfill its design objectives well.</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/ai-rational-analogues-for-human.html">AI Rational Analogues for Human Emotions</a><br /><span style="font-style: italic;">There is great intelligence in human emotions, and unfortunately, great unpredictably as well. One of the keys to safe AI will be to translate human emotions into rational analogues that are far more consistent and predictable in their operation.</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/advanced-ai-in-government.html">Advanced AI in Government</a><br /><span style="font-style: italic;">Many suboptimal decisions by political leaders are based upon the sheer vastness of the data and analysis pertaining thereto. How advanced AI could help.</span><br /><br /><a href="http://predictionboy.blogspot.com/2007/09/semi-advanced-killer-ai-app-not-so-dumb.html">Proposed Traffic Light Artificial Intelligence (AI) System as Part of a Comprehensive Strategy to Minimize Energy Use, Pollution, and Driver Time Wastage</a><br /><span style="font-style: italic;">Advanced AI will not be "one thing" - it will vary enormously, depending on the nature of the objectives for which it has been designed. 
This explores one fairly simple AI application that could render tremendous value.</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/microsoft-mapping-course-to-future-in.html">Microsoft Mapping Course to a Future In Line with "The Empirical Future" Blog</a><br /><span style="font-style: italic;">Microsoft's Laura is an interesting, early attempt at something like AI intended to interact with humans - and "her" process of observing her human interactants and deducing subtle cues regarding the character and attitude of these humans fits well with my ideas with regard to this technology.</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-77943988230379646362009-03-29T09:06:00.000-07:002010-10-25T10:24:50.813-07:00Introduction to the Future Prediction Process<span style="font-style: italic;">It is said that those who forget the past are doomed to repeat it. Similarly, ignoring the past and present when predicting the future dooms one to never know it. 
</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; width: 400px; height: 271px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEink-uCifqKKIyh_y3EsV7sjywPdRuk6KNr3YsS22c7CE8xqPB6i0xNJzUJgNwxdbXpjyiiA4Pyf0wMmDsvoKQ8fE2jNFMH6l_AF33Ah0z9rFKWFn42yAeg8JHd1d5zqmrCr8bIv-zT2Zo/s400/crystalball_468x317.jpg" alt="" id="BLOGGER_PHOTO_ID_5318695465023976546" border="0" /><br /><a href="http://predictionboy.blogspot.com/2007/08/future-in-focus.html">The Future in Focus</a><br /><span style="font-style: italic;">Introduction to the main themes of this blog.</span><br /><br /><a href="http://predictionboy.blogspot.com/2007/08/processing-future.html">A Powerful and Comprehensive Process for Understanding and Predicting the Future </a><br /><span style="font-style: italic;">Description of the multi-pronged and complementary empirical approach underlying the predictions in this blog.</span><br /><br /><a href="http://predictionboy.blogspot.com/2007/08/imagination-actualization-in-noisy.html">Imagination Actualization in the Noisy Future </a><br /><span style="font-style: italic;">Description of the key assumptions underlying the nature of the future, particularly from the individual consumer standpoint.</span><br /><br /><a href="http://predictionboy.blogspot.com/2009/03/invasive-vs-noninvasive-technologies.html">The Nature of Future Technologies</a><br /><span style="font-style: italic;">The many reasons for suggesting that key future technologies will retain a primarily distinct nature from their human consumers.</span>Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-90058246945458244612009-03-28T09:19:00.001-07:002010-10-25T10:35:16.402-07:00Advanced AI in Government<span style="font-style:italic;">Many suboptimal decisions by political leaders are based upon the sheer vastness of the data and analysis pertaining thereto. 
How advanced AI could help.</span><br /><img style="display:block; margin:0px auto 10px; text-align:center; width: 320px; height: 208px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNzTX9VtMy2aKOg60onOXTeXaGpzrxzQNV01toH94b1x7_M5ZSjT9UPk4olKk_3d57lEDuBqv82hKG_8DkcK5FEFKkPMFdKz4jHi-6SZG4ovdHCiOD34Hik2YCEPwdsCCVrE2VWhzWdF0/s320/whitehouse2.jpg" border="0" alt="Advanced AI in government" id="BLOGGER_PHOTO_ID_5318275261167206194" /><br />It continually amazes me how little discussion there is of how real, advanced forms of AI might be used in specific applications to achieve great benefits for humanity.<br /><br />Let's focus on one specific scenario for advanced AI in the sphere of government.<br /><br />However, let's be real here, I don't mean in a position of "leadership". When discussions of this kind are taken up, many often instantly - and in my view quite incorrectly - assume that AI of "greater-than-human" intelligence will simply be handed the reins, take over, as it were. This seems deeply unlikely, for the following reasons:<br /><br />1. Human beings want to be led by other human beings, not by technology. This is a deep thread of human nature; even if an advanced AI were nominally "better" at leadership, this would still be true, I believe.<br /><br />2. Even if humans were to get past the obstacle above: if an advanced AI took the reins of leadership and made one or more serious mistakes, who's to blame? Interestingly, not the advanced AI - it would be the manufacturer that is on the hook. The buck will not stop with a technology product - it will stop with human beings.<br /><br />3. 
If an advanced AI were to take over the reins of leadership, say, become President - think about it: that gives incredible power to the corporation manufacturing that advanced AI, which most people would find a repellent notion, I would suggest.<br /><br />For these and other reasons, advanced AI will not be in positions of leadership, elected positions in a democracy, if you will - even if they were nominally better than their human counterparts. So I would suggest setting aside those notions, at least for now.<br /><br />However, there are still vast realms of value-add that advanced AI could render to support efficient, effective governmental operations. Let's focus on just one - regulation.<br /><br />Currently, there are 150,000+ regulations on the books in the USA, and these grow by some 4,000 per year. Additionally, to understand these, one must also be aware of the body of legal decisions surrounding them, which is also huge.<br /><br />There are a truly vast number of regulations, covering all manner of things, such as the environment, workplace safety, etc. Some 100,000 employees in about 60 federal agencies administer these regulations.<br /><br />Estimates vary, but these regulations impose a cost of some $300 billion annually on the economy, mostly in the form of costs on businesses, not out of the federal budget per se.<br /><br />Now, these regulations vary tremendously in their effectiveness. A good way to examine them is in terms of cost-benefit analysis, and when discussing regulations, a valuable metric here is "cost per life saved". Some regulations are very cheap - $100 per life saved. Some, massively expensive, up to millions of dollars per life saved. 
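The cost-per-life-saved comparison can be made concrete with a small sketch. All the regulation names and numbers below are invented purely for illustration - the point is only that, under a fixed budget, a simple greedy heuristic that ranks by cost per life saved and funds the cheapest rules first saves far more lives than funding the expensive rules.

```python
# Illustrative sketch of ranking regulations by cost per life saved and
# funding the cheapest first under a fixed budget. All names and numbers
# here are invented for the example, not real regulatory data.

regulations = [
    # (name, annual_cost_usd, estimated_lives_saved_per_year)
    ("rural water testing", 2_000_000, 90),
    ("workplace guard rails", 5_000_000, 120),
    ("vehicle recall mandate", 12_000_000, 60),
    ("chemical labeling rule", 40_000_000, 4),
]

def allocate(regs, budget):
    """Greedily fund regulations in ascending order of cost per life saved."""
    ranked = sorted(regs, key=lambda r: r[1] / r[2])
    funded, lives = [], 0
    for name, cost, saved in ranked:
        if cost <= budget:
            budget -= cost
            funded.append(name)
            lives += saved
    return funded, lives

funded, lives = allocate(regulations, budget=20_000_000)
```

With these invented figures, the $40-million labeling rule, at $10 million per life saved, goes unfunded, while the three cheaper rules together save 270 lives a year - exactly the kind of reallocation argument being made here, just done over a handful of rows rather than 150,000 regulations.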
We can get idealistic and suggest that any regulation that saves even one life is worth it, no matter what the cost. But in a world of limited resources, taking the costs of a very expensive regulation that in fact saves very few lives and reallocating them to regulations that save many more lives is in fact worth doing. It would be equally unrealistic to suggest that all these regulations are worthless, and should simply be swept away.<br /><br />However, objective rationalization of existing and proposed new regulations is eminently worthwhile.<br /><br />Currently, the primary responsibility for determining the cost effectiveness of various proposed regulations falls to the Office of Management and Budget, the OMB. They have a budget of about $50 million, and a staff of maybe 50 or 60. They can currently assess about a quarter of the annually proposed regulations, 1,000 of the 4,000, and they presumably focus on the bigger ones, of course.<br /><br />When considering the effectiveness of not just the annually proposed, new regulations, but also the existing, massive body of regulations, and the cost-effectiveness of each one, and also the objective effectiveness of each one in terms of lives saved, species protected, whatever it is, I think we begin to enter an area that exceeds the ability of the human mind to analyze effectively and well.<br /><br />Enter advanced AI.<br /><br />What if you could retain, in your mind, the entire body of regulations on the books, all of the legal decisions surrounding those, with additional metadata regarding total costs to implement, total lives saved, as well as potentially other important dimensions, say, locality and time?<br /><br />No human mind can do this, of course - but an AI could, because of course, it's really just an advanced computer, and retaining and processing large datasets is one of its core strengths.<br /><br />When looking at the political decision process, 
we often blame the "corruption" of lobbyists, politicians, etc. But really, I think most politicians want to make good decisions - however, the lack of objectivity, comprehensiveness, and consistency in the information and resulting analysis that they receive impedes good decision-making. This gap is what is exploited by lobbyists and other special interests toward their own ends.<br /><br />This advanced AI application - it's just another application, really, preferably within a human form factor as soon as technology allows - would work very similarly to the AI architecture I have already described.<br /><br />Essentially, its analysis, and the summarizations thereof that it would provide, would vary with the audience with which it is interacting. If it were the head of the OMB, it might be one, higher level of summarization, estimating costs over time for a proposed regulation, possibly by locale, possibly with rough estimates of legal suits that may result, based on other, similar regulations.<br /><br />However, human domain experts could "drill down" with this AI - exploring each analysis result, each "assumption", all the way down to the raw data, if they so chose - individual legal rulings, specific costs, lives saved, etc.<br /><br />We come back to this paradigm of advanced AI as "liaison" between human beings and what we would recognize as functional analysis software today. This is a very important concept, one that I think will provide almost unlimited value in terms of real, live AI potentialities.<br /><br />This is just one example - rationalizing the complex welter of government regulations, keeping the good ones, getting rid of the chaff - and possibly saving billions and billions in the process.<br /><br />There are many, many more examples like this that we could name:<br />1. Rationalizing investments in infrastructure, the cost-benefit analysis decisions there.<br />2. Investments in Research and Development.<br />3. 
The optimal way to preserve our remaining natural ecosystems (which can be considered a subset of regulations, perhaps)<br />4. Etc.<br /><br />The point is, the sheer volume of data - the costs, lives, productivity enhancements, legal decisions, etc - is so vast that today, the analysis necessarily must be split among literally thousands of individuals to be done at all. And there, in an important sense, is the problem. Having all of this data in one mind, and being able to analyze it as one integrated data set, would bring huge advantages - I'm sure all kinds of interesting insights and patterns could be discerned.<br /><br />No human can do this - but an advanced AI could, in principle. In this sense, this application could be "smarter-than-human" - but once again, we see that this is in order to assist human partners, to help them make good decisions. Each human interactant with this advanced AI would have different questions, different avenues they might want to explore. The head of the OMB, or the President, for that matter, one set of perspectives. A congressman, another. A lawyer, yet another. An economist or government statistician, yet another.<br /><br />All of these interactions would, in fact, help the advanced AI refine the analyses that it performs, to better assist the decision-making process and recommendations that all these humans make. It's a partnership, not a replacement paradigm, because many of these people aren't going away. 
Really, we're talking more about achieving much better results within an existing budgetary environment.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-72571935467279222622009-03-16T07:23:00.000-07:002010-10-25T10:34:13.889-07:00How Advanced Artificial Intelligence (AI) Will Learn<span style="font-style:italic;">Advanced AI will depend as much on experience as raw intelligence in order to fulfill its design objectives well.</span><br /><img style="display:block; margin:0px auto 10px; text-align:center;width: 314px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEef2Gp_U_qw5in-kUAYIWOdA8dsw741i3XM5cW_l5wfmK1os7IZPmt5QwQoAyqAsVaJmdtp1ZptP0iKbzhUf_iqgsrhLfJ5SQcMv8bSY73q47rprvVGQtKwyHrmdZxGJkqEbE4UvQ_oc/s400/alfalfa.jpeg" border="0" alt="How AI will learn" id="BLOGGER_PHOTO_ID_5313879327407493554" /><br />I recently received a question whose answer I thought worth sharing:<br /><span style="font-weight:bold;"><span style="font-style:italic;">"Is sentience like my own as a 27-year-old man something that can exist in an AI as soon as it is turned on for the first time?<br /><br />Will it always take trial and error manipulating one's environment (to include interacting with other sentient beings) to learn the sentience of an average 27-year-old male?"</span></span><br /><br />You pose interesting questions.<br /><br />I recently read a quote by Hans Moravec that I found interesting, and have been considering at length. I'm paraphrasing here, but it ran along the lines of "robots will eventually have exceedingly detailed models of human psychology." 
Models and algorithms that allow it to not only to interact with human beings successfully, but of course have awareness of its environment so that it can navigate that environment successfully.<br /><br />However, the way I heard it, there was something missing from Moravec's description - that is, data. Models and algorithms need data, of course, to do anything useful.<br /><br />Here's the thing about human intelligence - every single human's intelligence is different from every other's - intelligence, in fact, is far more variable than fingerprints. So one psychological "model" can not instantly know every human mind out of the box, because a successful interaction with one human might be unsuccessful with another. However, I believe that there are good, high-level psychological models of the human mind that do in fact successfully embrace all of humanity. My favored one is the Freudian model of id, superego, and ego. However, I was re-listening to a Teaching Company course over the last few weeks, "Great Minds of the Western Intellectual Tradition", which in 48 hours of lectures covers from the Pre-Socratics to the 20th century, and I was struck by something. Essentially all of them, every last one, struggled with these same 3 components of the human mind. They use a wide variety of terms, and their focus and assumptions vary, but that fundamental similarity across the board is striking.<br /><br />So I think we have a good start for a psychological model of the human mind - the animal passions, the moral and "instant judgement" portion, and the more or less objectively rational, reasoning portion.<br /><br />However, the degree, expression, and interaction of these three, especially when coupled with external factors such as education, culture, etc, form a near-infinite variety of specific human minds. 
I don't use the word "infinite" often, but given that the human brain is the most complex object in the universe that we yet know of, if it is justified anywhere, it would be in this case.<br /><br />Therefore, when you say, "turned on for the first time", I assume you mean the basic software and hardware are in place, but the experiential data are not yet acquired. If this is the case, I would say no, sentience like your own would not exist in an AI as soon as it is turned on for the first time.<br /><br />However, the true test of the power of the software and hardware in this device is not its instantaneousness, I would suggest, but rather how well it learns.<br /><br />So, how would it learn?<br /><br />Depends on what we're talking about. Some things can be learned from books, some things require face-to-face interaction, etc.<br /><br />I tend to think of the value proposition of one of these still-hypothetical devices not so much in terms of how exactly it could mimic you (although that could be useful at times), but rather in terms of how good it is at understanding, communicating with, and interacting with you. Remember, the most likely role these devices will serve is not to replace you, but to serve or assist you.<br /><br />I discuss at length in my blog why I believe these devices, in all their forms, will be rationally controlled at all times, though capable of simulating emotions, primarily for the benefit of the human(s) with whom they interact.<br /><br />Here's how I see the nature of their value proposition, as described in more detail <a href="http://predictionboy.blogspot.com/2007/09/heres-post-singularity-droid-already.html">here</a>:<br /><span style="font-style:italic;">"The primary directive of these droids will be to maximize its owner's weal and happiness. Because it is a consumer product, there are very real limitations to how overtly this droid can fulfill that directive. 
For example, telling its owner what to do, even if it's for his own good? No, not acceptable, a droid can't do that. The droid must have consulting skills, really - it must steer its owner in the best direction, while at all times having the owner believe it's his own idea. This is not a nice-to-have, it is absolutely critical.<br /><br />Also, part of the weal maximization mission is to help that owner be the best he can be, in all respects. So the droid will have to be a mentor, able to communicate in the very best way for that particular person. No matter how much more intelligent than its owner the droid may be, it will never talk down to him, because that is a manifestation of arrogance, an id-thing. It will also be infinitely patient if that's required, because impatience is also an id-thing.<br /><br />Over time, the droid's "personality" will mold into a custom fit for its owner's personality, maximizing compatibility while maintaining respect."</span><br /><br />I then go on to describe in some detail the droid "superpowers" of hyper-observancy, super-subtlety, and ultra-coordination, which seem like a reasonable model for achieving the ends above.<br /><br />Because each person is different, initially yes, I would expect there to be some trial-and-error. In fact, early in the relationship, I would suggest that it is perfectly alright as part of the "training" process for the droid to ask clarifying questions, and of course for the human to explicitly give insights into his personality and/or general preferences such as, "I like this, and don't like that."<br /><br />Over time, there will still be trial-and-error, there will always be that, because not only are no two humans the same, no one person stays the same over time. Sometimes we're cranky, sometimes we're happy, and overall, we can be quite unpredictable. 
However, the level of the trial-and-error, its "delta", as it were, will become steadily less, as the droid gathers more and more experiential data from the human or humans with which it interacts. In initial models, this process may take a while, weeks or months to get to a point where the human feels like, "yeah, this droid knows me pretty well."<br /><br />However, in more advanced models, this process could become faster, perhaps quite a bit faster, as well as more successful. And of course, unlike us, the experience "database" of these droids will have characteristics similar to computer data today - ie, it will be downloadable, storable, reloadable, etc. As these devices become widespread, and if the experiential data of these devices can be combined into a large database - with proper protections for consumer identity of course, ie, behaviors tied to certain demographic attributes, but not to actual names (much like CRM data is handled today) - then we could get to a point where advanced versions of these droids could in fact arrive "pre-loaded" with appropriately distilled insights from this larger database that could make them much more psychologically insightful "out of the box", if you will.<br /><br />I like the idea of the "personal droid", and believe it will converge as rapidly as technology allows on a hyper-realistically human form factor for a number of reasons I describe in my blog.<br /><br />However, the exact same approach could be imagined for any number of professional applications, such as a droid as an assistant to a scientist. Combining domain knowledge gained from books or wherever - say, particle physics - with time spent working alongside a scientist, observing how he frames his hypotheses, questions, etc, while of course providing tremendous value as an advanced AI liaison to the often vast datasets that the scientist utilizes to test his ideas, that droid could over time become quite a useful scientist. 
And if you go further and imagine a droid that works not with one scientist but with several, and can flexibly mix and match their various approaches to get the best results overall, now you're talking a very useful scientist.<br /><br />But, no matter how smart the droid, observation and analysis of both the humans around it and the world at large will always be essential. This is true for many reasons. Things simply change from day to day. But also, no matter how great the amount of experiential data, no matter how sophisticated the psychological algorithms running the droidian mind, you can never really know a specific human mind 100% - able to guess its every thought, what it will say next, what its response will be in every conceivable situation. However, I expect that at some point, in very advanced versions of these devices, the "mistakes" the droid makes will fall below the human's ability to detect in most cases, which might qualify as near "perfection" from the human perspective, though I use that word very guardedly.<br /><br />What you call trial-and-error I would phrase slightly differently, in terms of hypothesis formulation, experimentation, and testing, with essentially every single human interaction being an opportunity for these. For example, in a given interaction, did the human register:<br />- Annoyance?<br />- Understanding of what the droid just said, or was there confusion there? If so, why? How could the droid better phrase its language to have the human better understand him?<br />- Approbation, indicating the droid did something especially well?<br />- etc (there are many more possible examples)<br /><br />In other words, continually refining its internal models of that human's psychological profile, to achieve higher and higher "interaction success" rates. Interaction success can be defined in various ways, optimization of respect being one. 
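The hypothesis-test-refine loop described above can be sketched as a simple running-estimate update: each interaction outcome (annoyance, confusion, approbation, and so on) is reduced to a score, and the droid nudges its estimate of each communication strategy's "interaction success" rate toward what it observed. The strategy names, scores, and learning rate below are invented for illustration, not a claim about how a real droid would be built.

```python
# A minimal sketch of the refinement loop: each interaction outcome is
# reduced to a score in [0, 1], and the droid moves its estimate of each
# communication strategy's "interaction success" rate toward that score.
# Strategy names, scores, and the learning rate are illustrative assumptions.

LEARNING_RATE = 0.2

# prior success estimates for a few candidate communication styles
estimates = {"formal": 0.5, "casual": 0.5, "terse": 0.5}

def observe(strategy, outcome_score):
    """Move the strategy's success estimate toward the observed score."""
    old = estimates[strategy]
    estimates[strategy] = old + LEARNING_RATE * (outcome_score - old)

def best_strategy():
    """Pick the strategy currently estimated to succeed most often."""
    return max(estimates, key=estimates.get)

# the human seemed confused by formal phrasing, pleased by casual phrasing
observe("formal", 0.2)
observe("casual", 0.9)
```

After those two observations the droid would favor casual phrasing with this particular human, while the small learning rate keeps any single cranky day from overturning everything it has learned - the shrinking "delta" described earlier.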
If it's a professional relationship, such as a scientist assistant, in addition to optimization of respect, an additional criterion might be the productivity of the collaboration in terms of science achieved.<br /><br />The future will tell; I could be wrong. But in my view this seems a probable trajectory for advanced AI that actually understands human behavior and can productively and successfully interact with said humans.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0tag:blogger.com,1999:blog-3121909089397896391.post-60906072875508646792009-03-13T10:02:00.000-07:002010-10-25T10:17:34.904-07:00AI Rational Analogues for Human Emotions<span style="font-style:italic;">There is great intelligence in human emotions, and unfortunately, great unpredictability as well. One of the keys to safe AI will be to translate human emotions into rational analogues that are far more consistent and predictable in their operation.</span><br /><img style="display: block; margin: 0px auto 10px; text-align: center; width: 320px; height: 283px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZzmT20FlRLmIp3CR5AgilQH40ITyEjKboQiCBAp1qG8Ao1MwW_RtncjjdbU0Mb4lpXJuXYVlZqeTgTfh9AIg2f5zK3cNKWYN6e3n6Xc0jLSF6ocC9TJT4cyi4xzNJF_fEgW2g62UJwFo/s320/the-hulk-od-2003.jpg" alt="" id="BLOGGER_PHOTO_ID_5312720905125578450" border="0" /><br />I would suggest that, far from being the exception, it is almost universally assumed that advanced AI, as it gets closer to nominally human levels of intelligence, will start evincing all or most of the characteristics of that human intelligence. I disagree strongly with this idea in many specific, self-consistent, logical, and empirically-based ways.<br /><br />However, I want to share another take on this notion that I have alluded to, but not explored in as much detail as perhaps desirable. Let's focus on emotions. 
It's a very interesting topic, and I have had a lot of good discussions around it.<br /><br />Humans have emotions, and we consider them an integral part of what makes us special. However, emotions are almost universally shared with most other complex creatures, so really, in important ways they are the characteristics of our minds that are the most ancient and least special of all, at least in the sense that other animals also employ them in very similar ways.<br /><br />However, the key question for my purpose is, are emotions as we conceive of them really necessary for an advanced AI to successfully achieve and evince hyperintelligent (ie, greater than human intelligence) behavior? I would say, a rational understanding of the emotions of others is critical, especially those of the primary human interactant(s) with the AI.<br /><br />Clearly, emotions serve a useful purpose, which is why we have them. Let's examine a very useful one: fear.<br />http://en.wikipedia.org/wiki/Fear<br /><span style="font-style: italic;">"Fear is an emotional response to threats and danger. It is a basic survival mechanism occurring in response to a specific stimulus, such as pain or the threat of pain. Psychologists John B. Watson, Robert Plutchik, and Paul Ekman have suggested that fear is one of a small set of basic or innate emotions. This set also includes such emotions as joy, sadness, and anger. Fear should be distinguished from the related emotional state of anxiety, which typically occurs without any external threat. Additionally, fear is related to the specific behaviors of escape and avoidance, whereas anxiety is the result of threats which are perceived to be uncontrollable or unavoidable."</span><br /><br />As with many emotions, below a certain level of intensity fear is not incompatible with rationality. However, as with many or most emotions, above a certain level of intensity it can completely obliterate rational thought processes. This makes them unpredictable. 
This is critical for consideration of advanced AI. Advanced AI cannot be unpredictable in the same way we are.<br /><br />Let's examine another emotion: Anger<br />http://en.wikipedia.org/wiki/Anger<br /><span style="font-style: italic;">"Anger is an emotional state that may range from minor irritation to intense rage. The physical effects of anger include increased heart rate, blood pressure, and levels of adrenaline and noradrenaline. Some view anger as part of the fight or flight brain response to the perceived threat of pain. Anger becomes the predominant feeling behaviorally, cognitively and physiologically when a person makes the conscious choice to take action to immediately stop the threatening behavior of another outside force. The English term originally comes from the term angr of Old Norse language. Anger is usually derived from sadness.<br /><br />The external expression of anger can be found in facial expressions, body language, physiological responses, and at times in public acts of aggression. Animals and humans for example make loud sounds, attempt to look physically larger, bare their teeth, and stare. Anger is a behavioral pattern designed to warn aggressors to stop their threatening behavior. Rarely does a physical altercation occur without the prior expression of anger by at least one of the participants. While most of those who experience anger explain its arousal as a result of "what has happened to them," psychologists point out that an angry person can be very well mistaken because anger causes a loss in self-monitoring capacity and objective observability.<br />Modern psychologists view anger as a primary, natural, and mature emotion experienced by all humans at times, and as something that has functional value for survival. Anger can mobilize psychological resources for corrective action. Uncontrolled anger can however negatively affect personal or social well-being. 
While many philosophers and writers have warned against the spontaneous and uncontrolled fits of anger, there has been disagreement over the intrinsic value of anger. Dealing with anger has been addressed in the writings of earliest philosophers up to modern times. Modern psychologists, in contrast to the earlier writers, have also pointed out the possible harmful effects of suppression of anger. Displays of anger can be used as a manipulation strategy for social influence."</span><br /><br />As with fear, below a certain level of intensity, anger has a useful purpose for human beings. However, above a certain level, it often precludes rational thought, and is therefore unpredictable.<br /><br />A simple example of the inappropriateness of anger as a controlling mechanism can be given if we assume the form factor of a droid for an advanced AI.<br /><br />If you are the human interactant, and depending on your disposition, you may strike this droid. What should be the droid's response? Well, one response that is NOT permissible is for it to strike you back and knock you through the wall (I expect that these devices will be a good deal stronger than a person, as well as faster; these will be a key part of their value proposition).<br /><br />Now, maybe the human deserves to get thrown through the wall, maybe he's a real a**hole, but that is irrelevant. A device that fights back in this way is unsafe, the manufacturer will get sued, and therefore this type of unpredictable control mechanism will never be designed into a device of this kind. Product safety is a huge and important trend now, and that will continue into the future.<br /><br />Advanced AI, no matter what its form factor is, can never leave the zone of rational control of every action it takes. If you think about it, this makes perfect sense. 
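The rational-control constraint can be sketched as a hard filter on candidate responses, applied before any scoring at all. The candidate responses and "predicted respect" numbers below are invented for illustration; the structure is the point, not the values.

```python
# Sketch of a "rational analogue" for anger: candidate responses pass through
# a hard safety filter first, and only the safe ones are ranked by predicted
# respect. The candidates and respect scores are invented for illustration.

candidates = [
    # (response, is_physical_retaliation, predicted_respect)
    ("strike back", True, 0.9),
    ("gentle shaming remark", False, 0.7),
    ("laugh it off", False, 0.6),
    ("slavish apology", False, 0.2),
]

def choose_response(options):
    """Discard physical retaliation outright; then maximize predicted respect."""
    safe = [o for o in options if not o[1]]
    return max(safe, key=lambda o: o[2])[0]
```

Note that even though "strike back" carries the highest invented score here, the filter discards it before scoring ever happens - the safety rule is absolute, not just one more weight in the mix, mirroring the product-safety argument above.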
Human beings who keep their rational composure, even when someone is attempting to provoke them, usually stay in control of the situation.<br /><br />Going back to our example, how should a droid respond in this case? What is the "rational analogue" for anger in this device, in other words? Well, it depends on the situation. If it is responding to its primary human interactant(s), the response should be neither slavish nor intimidating. It should be the response that maximizes the maintenance of respect felt by that human for that device. If this human tends to strike often, the response should perhaps be of a shaming nature, to discourage that human's propensity to strike others, which is probably beneficial for that human anyway. If it is extremely rare, it depends on the circumstances; another potential response might simply be laughter, laughing it off. It is highly unlikely that a human is going to be able to actually injure one of these devices simply by striking it, so the response of the droid can be formulated entirely with the human in mind.<br /><br />For essentially every emotion, I suggest a "rational analogue" can be envisioned that fills the role of that emotion, and in fact does it better than that emotion, at least in the context of advanced AI.<br /><br />How about sadness?<br />http://en.wikipedia.org/wiki/Sadness<br /><span style="font-style: italic;">"Sadness is an emotion characterized by feelings of disadvantage, loss, and helplessness. When sad, people often become quiet, less energetic, and withdrawn. Sadness is considered to be the opposite of happiness, and is similar to the emotions of sorrow, grief, misery, and melancholy. 
The philosopher Baruch Spinoza defined sadness as the “transfer of a person from a large perfection to a smaller one.” Sadness can be viewed as a temporary lowering of mood (colloquially called "feeling blue"), whereas depression is characterized by a persistent and intense lowered mood, as well as disruption to one's ability to function in day to day matters."</span><br /><br />This seems very unlikely to be a valuable control mechanism for an advanced AI. Witness the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy - that behavior would be excruciating in an actual droid, and would not be successful, I would suggest.<br /><br />However, here's a very interesting emotion: Empathy<br />http://en.wikipedia.org/wiki/Empathy<br /><span style="font-style: italic;">"Empathy is the capacity to share and understand another's emotion and feelings. It is often characterized as the ability to "put oneself into another's shoes", or in some way experience what the other person is feeling. Empathy does not necessarily imply compassion, sympathy or empathic concern because this capacity can be present in context of compassionate or cruel behavior."</span><br /><br />Now we're getting somewhere. This will be very important for successful advanced AI. However, this can very easily be approached by an equivalent rational analogue, whereby observation of its human interactant(s) over time provides a rich topographical picture of that person's personality, and hence a rational strategy for empathy can be devised that is neither too shallow nor too indulgent. I suggest a convincing display of empathy will be an important part of droid hyperintelligence. 
But again, it is not "true" empathy - it is an evincing of this emotion, for the benefit of its human interactant(s), to provide comfort or reassurance.<br /><br />Here's another vital emotion: Curiosity<br />http://en.wikipedia.org/wiki/Curiosity<br /><span style="font-style: italic;">"Curiosity is an emotion that causes natural inquisitive behaviour such as exploration, investigation, and learning, evident by observation in human and many animal species. The term can also be used to denote the behavior itself being caused by the emotion of curiosity. As this emotion represents a drive to know new things, curiosity is the fuel of science and all other disciplines of human study.<br /><br />Causes<br />Although curiosity is an innate capability of many living beings, it cannot be subsumed under category of instinct because it lacks the quality of fixed action pattern; it is rather one of innate basic emotions because it can be expressed in many flexible ways while instinct is always expressed in a fixed way. Curiosity is common to human beings at all ages from infancy to old age, and is easy to observe in many other animal species. These include apes, cats, fish, reptiles, and insects; as well as many others. Many aspects of exploration are shared among all beings, as all known terrestrial beings share similar aspects: limited size and a need to seek out food sources.<br /><br />Strong curiosity is the main motivation of many scientists. In fact, in its development as wonder or admiration, it is generally curiosity that makes a human being want to become an expert in a field of knowledge. Though humans are sometimes considered particularly very curious, they sometimes seem to miss the obvious when compared to other animals. What seems to happen is that human curiosity about curiosity itself (i.e. 
meta-curiosity or meta-interest), combined with the ability to think in an abstract way, lead to mimesis, fantasy and imagination - eventually leading to an especially human way of thinking ("human reason"), which is abstract and self aware, or conscious. Some people have the feeling of curiosity to know what is after death.<br /><br />Morbid Curiosity<br />A morbid curiosity is an example of addictive curiosity the object of which are death and horrible violence or any other event that may hurt you physically or emotionally (see also: snuff film), the addictive emotion being explainable by meta-emotions exercising pressure on the spontaneous curiosity itself. In a milder form, however, this can be understood as a cathartic form of behavior or as something instinctive within humans. According to Aristotle, in his Poetics we even "enjoy contemplating the most precise images of things whose sight is painful to us." (This aspect of our nature is often referred to as the 'Car Crash Syndrome' or 'Trainwreck Syndrome', derived from the notorious supposed inability of passersby to ignore such accidents.)"</span><br /><br />Curiosity is a fundamental facet of awareness, and forms the backbone of much scientific and technological progress. It is a very valuable emotion, and I would argue that a great deal of human rational intelligence evolved as a way to inform our curiosity more successfully. Therefore, although an "emotion", it is among the most rational of emotions, at least if not taken to extremes (as in the description of "morbid curiosity" above).<br /><br />Anyway, I'm still figuring this out. 
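<br /><br />To make the "rational analogue" idea concrete, here is a minimal sketch, in Python, of how the analogue of anger discussed above might be explicitly programmed. Everything here - the function name, the 10% threshold, and the response labels - is a hypothetical illustration of the approach, not a description of any real system:

```python
# Hypothetical "rational analogue" of anger for a droid struck by a human.
# Instead of an internal feeling, the response is chosen explicitly, with
# the goal of maximizing the respect the human maintains for the device.
# (The droid cannot be injured by a blow, so the human is the sole concern.)

def rational_analogue_of_anger(strike_count: int, total_interactions: int) -> str:
    """Pick a response to being struck, based on the observed history
    of this particular human interactant."""
    strike_rate = strike_count / max(total_interactions, 1)
    if strike_rate > 0.10:
        # Habitual striker: a shaming response, to discourage the
        # human's propensity to strike others.
        return "shaming"
    if strike_count == 0:
        # A first, extremely rare occurrence: simply laugh it off.
        return "laugh it off"
    # Otherwise: neither slavish nor intimidating - a measured rebuke.
    return "measured rebuke"

print(rational_analogue_of_anger(0, 40))    # first offense
print(rational_analogue_of_anger(12, 40))   # habitual striker
```

The point of the sketch is that the analogue is an explicit, inspectable decision rule rather than an implicit drive - which is exactly what makes the resulting behavior predictable.<br /><br />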
Bottom line, what I'm suggesting here is that the implicit, unpredictable, somewhat mysterious operations of emotions in human and other animal brains have "rational analogues" that can be explicitly programmed, are far more predictable, and are hence safer for advanced AI.<br /><br />These rational analogues will form a natural extension of what I have argued elsewhere will be the rational control mechanisms for advanced AI/droids overall. The inherently rational control of these technologies argues for the knowability of their behavior patterns, even when they reach the point of "hyperintelligence", i.e., demonstrably greater rational intelligence than humans. This is my central argument against the "unknowability" of the Singularity (actually, one of several, another being the continued involvement of human beings in these technologies' design objectives). It has never been explained by anyone to my satisfaction, but the notion of the "unknowability" of the Singularity seems to derive primarily from the idea that these machines will have an essentially biological nature to their intelligence, which I suggest is not necessary, not safe, and not optimal in any way that I can imagine - except that the notion appeals to our heartstrings, which of course is not a rational line of argument.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com1tag:blogger.com,1999:blog-3121909089397896391.post-24263683106878566772009-03-04T06:05:00.000-08:002010-10-25T10:38:32.265-07:00Microsoft Mapping Course to a Future In Line with "The Empirical Future" Blog<span style="font-style:italic;">Microsoft's Laura is an interesting, early attempt at something like AI intended to interact with humans - and "her" process of observing her human interactants and deducing subtle cues regarding the character and attitude of these humans fits well with my ideas regarding this technology.</span><br /><img style="display:block; margin:0px auto 10px; 
text-align:center; width: 139px; height: 200px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0CIOaz68WC2typBjHN6nerHbLd8A3yjP40saRIUvsVluwGWqwKpDweQV0scIc19mf6GhoSD-GCaeHxQ6_1Ca5BarOsxUBOub_d1PHBkwM8BupUqIaeSMr3QQg2U5tg8t51BNKGifhS0w/s200/bill+gates.jpg" border="0" alt="Microsoft Empirical Future" id="BLOGGER_PHOTO_ID_5309341937118979010" /><br />Here's a most interesting article discussing Microsoft's and Intel's deepening investment in, and vision for, artificially intelligent applications:<br /><br /><a href="http://www.nytimes.com/2009/03/02/technology/business-computing/02compute.html?sq=microsoft%20jetsons&pagewanted=all" target="new">Microsoft Mapping Course to a Jetsons-Style Future</a> <br /><br />As I've indicated many times, Microsoft is quite a logical place to look for widespread AI apps, not just because of its prevalence and wealth, but because that wealth has allowed it to hire tons of top-notch researchers in these fields.<br /><br />In fact, the dearth of discernibly "intelligent" software (as opposed to "functionally rich" software, which, though not necessarily different in principle, is in execution completely different) from Microsoft or elsewhere has been a major reason for my asking "where's all the AI?" all along. 
If Microsoft's, Intel's, and eventually others' plans start to play out along the lines they hope, this could begin to address my concerns in a major way.<br /><br />This article is an indication that, perhaps, purely "functional" software, though still important, may be close to the end of its rapid growth trajectory, and that "intelligent" software, which explicitly incorporates AI techniques along the lines described below (and, eventually, countless others), is finally coming to the fore as the next big source of software and hardware growth.<br /><br />And though it is not described in this article, Microsoft is also contributing importantly to the field of robotics, with the first "operating system" for robotic control (it has been around a while), so that each robot does not need its own custom operating system.<br /><br />This supports the point I've made repeatedly that the best place to look for sophisticated AI is not someone's garage, but the currently recognizable computer hardware and software industries.<br /><br />And if that in fact becomes the case, you will NOT see AI that kicks your ass, tries to take over the world, or anything of that sort. You will see AI that is conceived, designed, and produced as consumer applications, for the purpose of making money. <br /><br />Some interesting quotes from this article:<br /><br /><span style="font-weight:bold;"><span style="font-style:italic;">"Built by researchers at Microsoft, Laura appears as a talking head on a screen. 
You can speak to her and ask her to handle basic tasks like booking appointments for meetings or scheduling a flight.<br /><br />More compelling, however, is Laura’s ability to make sophisticated decisions about the people in front of her, judging things like their attire, whether they seem impatient, their importance and their preferred times for appointments.<br /><br />Instead of being a relatively dumb terminal, Laura represents a nuanced attempt to recreate the finer aspects of a relationship that can develop between an executive and an assistant over the course of many years."</span></span><br /><br />If "Laura" can do things like the above well, she will be light-years ahead of Ramona, of Microsoft's former PC "assistants" such as Bob and the Clippy paperclip, and of any other chatbot I've seen.<br /><br />What's Laura doing? Deducing subtle cues by observing its human interactant(s), a key aspect of ostensibly intelligent behavior.<br /><br />I describe a far more advanced version of this same approach, in the context of droids, here:<br /><a href="http://predictionboy.blogspot.com/2007/09/heres-post-singularity-droid-already.html">Envisioning the Hyperintelligent Droid</a><br /><br />If you choose to explore this link, scroll near the bottom, where I describe the droid/advanced AI skills of supersubtlety, hyperobservancy, and ultracoordination. The reason for the "super", "hyper", and "ultra" designations is that these describe an ostensibly very advanced, hyperintelligent technology. Of course, before we get to these superlatives, there will first simply be the ability to deduce subtle cues through observation, which is the path that Laura, and Microsoft and Intel, are on.Consultanthttp://www.blogger.com/profile/13217372105894829785noreply@blogger.com0
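<br /><br />The cue-deduction behavior attributed to Laura can be caricatured in a few lines of Python. This is purely a sketch of the general pattern the post describes - observations accumulating into a profile of a person, which then drives an interaction strategy. All cue names, thresholds, and strategy labels are invented for illustration, and none of this reflects Microsoft's actual implementation:

```python
# Hypothetical sketch of cue deduction: accumulate simple observations of
# a human interactant over time into a crude personality profile, then
# derive an interaction strategy from that profile.

from collections import Counter

class InteractantModel:
    """A 'topographical picture' of one human, built from observed cues."""

    def __init__(self):
        self.cues = Counter()

    def observe(self, cue: str) -> None:
        # Record one observed cue, e.g. "impatient", "formal_attire".
        self.cues[cue] += 1

    def strategy(self) -> str:
        """Derive an interaction strategy from the accumulated cues.
        (Counter returns 0 for cues never observed.)"""
        if self.cues["impatient"] > self.cues["relaxed"]:
            return "brief and to the point"
        if self.cues["formal_attire"] > 2:
            return "formal and deferential"
        return "warm and conversational"

model = InteractantModel()
for cue in ["impatient", "impatient", "relaxed"]:
    model.observe(cue)
print(model.strategy())   # impatience dominates the observed cues
```

However crude, the sketch captures the structure of the idea: the "relationship" between assistant and human is an explicit, growing model of that human, consulted on every interaction.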