Monday, October 11, 2010
Your Brain as a Neural Network
Your brain is a neural network.
One thing that follows immediately from the fact that the brain is a neural network is that every computation the brain performs is a collective computation, not some other kind of computation. Not a digital computation, not a quantum computation, not some other flavor of computation - collective computation, exclusively. Even thought processes that seem like they would be digital, such as adding 1 and 1, are performed in our brains via collective computation and associative memory.
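The classic illustration of collective computation and associative memory is the Hopfield network. The sketch below (plain numpy, a toy model, not a claim about how real neurons implement this) stores a pattern in its connection weights and then retrieves it from a corrupted cue: the "answer" emerges from all the units settling together, not from any step-by-step digital procedure.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: every stored pattern shapes every connection weight."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Update all units together until the net settles on a stored memory."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store one pattern, then recover it from a corrupted cue.
memory = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(memory)
noisy = memory[0].copy()
noisy[:2] *= -1               # "damage" two neurons' states
print(recall(W, noisy))       # the collective settles back onto the memory
```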
An adult human brain has about 100 billion neurons. The number of synaptic connections is not precisely known, but estimates range from 100 trillion to 500 trillion. It is not just which neurons are connected to which others (the “topology” of the neural net) that matters to the collective computation process; the strengths, the “resistance” (in electrical-analogue terms), of each of those synaptic connections matter as well. If you vary the connections, you potentially get a different computed result. If you vary the connection strengths, you potentially get a different computed result. If you vary both, you potentially get a still different computed result.
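A minimal numpy sketch of that point: two nets with identical wiring but different connection strengths compute different answers from the same input (the weight values here are arbitrary, chosen only for illustration).

```python
import numpy as np

# Identical topology (same wiring diagram), two different sets of connection
# strengths: the same input yields different computed results.
def tiny_net(W1, W2, x):
    h = np.tanh(W1 @ x)        # hidden layer activity
    return np.tanh(W2 @ h)     # output of the collective computation

x = np.array([1.0, -1.0])
# Strengths, set A
out_a = tiny_net(np.array([[0.5, -0.5], [1.0, 1.0]]), np.array([[1.0, -1.0]]), x)
# Strengths, set B: same connections, different strengths
out_b = tiny_net(np.array([[-0.5, 0.5], [1.0, 1.0]]), np.array([[1.0, 1.0]]), x)
print(out_a, out_b)            # different answers from the same topology
```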
The various neural nets and sub-nets in your brain perform an incredible diversity of processing tasks. They process sensory information; they control the movements of your physical body (your nervous system should be thought of as an extension of your brain’s neural network); they recognize the letter “e”; they perform feats of imagination; and they carry out countless other tasks.
The net of nets that is your brain does all these things, and many more, and it does it all exclusively through collective computation and associative memory.
As we learn to develop artificial neural networks that perform ever more complex processing tasks, there is another aspect to our biological brains that we have barely begun to tackle.
That is, our brain is not just a bunch of neural nets, each doing its own processing functions; these nets are tightly interwoven into a highly synchronized, integrated processing entity. This means that there are neural networks that control other neural networks, which in turn control still other neural networks, and so on. The output of all these collective computations then flows back up the chain of neural nets to execute the “output” of whatever computation is being considered – say, swatting a fly, writing the letter “e”, or simply staying in the brain as a thought. The complexity of designing a neural network to perform even one specialized computational task of nontrivial complexity can quickly become daunting; designing a hierarchical system of nested neural nets that can perform a range of different computational tasks under one hood (so to speak) is a far greater challenge still. Our ability to design and build artificial neural nets that control other neural nets in complex yet stable and consistent ways is in its earliest infancy.
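One rough engineering analogue of nets controlling nets is a gating arrangement, where a small "controller" network decides how much each sub-network contributes to the final result. The sketch below is a mixture-of-experts-style toy; the random weights stand in for circuits that would normally be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "sub-nets" (random linear maps standing in for trained circuits).
experts = [rng.normal(size=(3, 4)) for _ in range(2)]

# A "controller" net whose output gates the sub-nets.
W_gate = rng.normal(size=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hierarchical_forward(x):
    gate = softmax(W_gate @ x)                 # the controlling net decides
    outputs = [W @ x for W in experts]         # each sub-net computes
    # The controller's decision flows back down as a weighting of sub-net outputs.
    return sum(g * out for g, out in zip(gate, outputs))

print(hierarchical_forward(rng.normal(size=4)))
```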
Because the outside world is an extremely information-rich and ever-changing place, it is imperative that our brains be able to constantly process, accommodate, and successfully adapt to new information. There are at least two mechanisms to achieve this.
One is adding new neurons. It has recently been discovered, from detailed experiments with rats, that their brains grow about 10,000 new neurons a day throughout their lives. A further, truly fascinating result is that if the animal is not learning something new, many of these neurons don’t stick around; they die off quickly. If the animal is learning something new (the experiments involved mazes and other standard rat intelligence training exercises), that knowledge is incorporated by wiring these new neurons up with existing neurons, forming a permanent addition to the rat’s brain.
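By loose analogy, here is a toy grow-and-prune sketch in numpy: new "neurons" (rows of a weight matrix) are added, and those that never get recruited are removed. This illustrates the idea only; it is not a model of real neurogenesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows of the weight matrix are hidden "neurons" with their connections.
W = rng.normal(size=(4, 8))

def grow(W, n_new):
    """Neurogenesis: add newborn neurons with fresh random connections."""
    return np.vstack([W, rng.normal(size=(n_new, W.shape[1]))])

def prune_unrecruited(W, threshold=0.5):
    """Newborns whose connections stay weak 'die off'."""
    usage = np.abs(W).mean(axis=1)
    return W[usage > threshold]

W = grow(W, n_new=2)          # new neurons come on board
# ... learning a new task would strengthen the useful new rows here ...
W = prune_unrecruited(W)      # unrecruited newborns don't stick around
print(W.shape)
```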
This process is common to at least all mammals (and probably, to some extent, most other creatures with brains), including, of course, humans. Especially humans, since our brains are our main evolutionary survival strategy. The rate of neuron production in humans is not known exactly, since measuring it involves direct physical examination of the brain (many rats died to bring us this knowledge, in other words) – but I would venture that a reasonable and perhaps even conservative estimate for new neurons in humans might be something like 10 times the rat number: say, 100,000 neurons a day.
An interesting additional insight from these studies was that some factors, such as exercise and certain foods, can accelerate this natural rate of neuron production. Other agents, such as alcohol, were shown to inhibit it – so drinking and drugging do reduce your ability to acquire new knowledge, at least to some extent.
Once incorporated into the brain, this topology of neuronal connections seems to be more or less fixed.
In addition to the neuron connection topology, there is another dimension of neural network flexibility that must be considered: the strengths of those myriad connections. If you vary the connection strengths, you potentially alter the computed result – the stable, self-consistent “answer” or response output of the neural net.
While the physical connection topology of the brain may be more or less fixed (with the exception of the new neurons coming on board every day), this is not the case with the connection strengths. These vary widely via the passing of ions across the synaptic channel, which can produce the chemical equivalent of changing the resistance (or voltage drop) in an analogous electronic circuit. This happens on a grand scale in our brains, where resistance profiles that vary with the inputs produce a circuit that converges on a self-consistent, stable state. This could allow for very efficient neural nets that can, say, recognize many human faces with far fewer neurons than if each face had a dedicated neural network with its own dedicated neurons.
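To make the face example concrete, here is a toy version of that sharing – a Hopfield-style sketch in numpy, with random ±1 vectors standing in for “faces”. One shared weight matrix stores several patterns, and the same neurons recall whichever one the cue is closest to; no face gets its own dedicated hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
faces = rng.choice([-1, 1], size=(3, 64))     # three toy "faces"

# One shared weight matrix stores all three; no dedicated neurons per face.
n = faces.shape[1]
W = sum(np.outer(f, f) for f in faces) / n
np.fill_diagonal(W, 0)

cue = faces[1].copy()
cue[:6] *= -1                                 # a partial, corrupted view of face 1
state = cue
for _ in range(10):                           # let the net settle collectively
    state = np.sign(W @ state)
    state[state == 0] = 1
print(np.array_equal(state, faces[1]))        # typically True: face 1 is recalled
```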
I don’t want to get too detailed about how a biological brain achieves this, but here is a very brief description. Synapses exhibit a behavior called "spike-timing-dependent plasticity", which is thought to be a possible basis for memory and learning in human and other mammalian brains. The synaptic connection between two neurons becomes stronger or weaker as the time gap between when they fire becomes shorter or longer.
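A minimal sketch of the standard exponential form of that rule (the constants are illustrative, not measured biological values):

```python
import numpy as np

# Spike-timing-dependent plasticity (STDP): if the presynaptic neuron fires
# just before the postsynaptic one, the synapse strengthens; if just after,
# it weakens. Closer spike times mean a bigger change.
A_PLUS, A_MINUS = 0.01, 0.012    # learning rates (illustrative)
TAU = 20.0                       # time constant in ms (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired first: potentiate
        return A_PLUS * np.exp(-dt / TAU)
    else:        # post fired first (or simultaneously): depress
        return -A_MINUS * np.exp(dt / TAU)

for dt in (2, 10, 50, -2, -10, -50):
    print(f"gap {dt:+4d} ms -> weight change {stdp_dw(0, dt):+.5f}")
```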
These characteristics – adding new neurons and adjusting the strengths of the connections between existing neurons – endow the neural network in our heads with programmability: the ability to accommodate new information and behaviors, where the brain’s control centers and the inputs from the external world can modify the brain’s “computed result” (which is of course many computed results, even for simple tasks) over time.
The human brain is the most complex single object in the universe that we know of (with apologies to any more advanced alien brains that may be out there). The brain is an expensive organ, using something like 20% of the body’s energy budget while accounting for only about 2% of its mass.
The complexity of the brain’s processes translates into two main costs: the neural network itself – keeping all those neurons and synapses alive and well – and the “software” to run it, the mechanisms for bringing in new neurons and changing the connection strengths of the existing ones. Without these mechanisms, our brains, no matter how large and complex, would be brittle and limited once they were filled with knowledge (which, given the sensory richness of the world, would not take long).