Brain simulations stomping forward

After my recent update on Whole brain emulation, the BBC is now reporting on real-world research that ups the ante in the race to create a functioning simulation of ever bigger brains:

IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do.

The longer-term goal is to create a system with the level of complexity of a cat’s brain.

Prof Modha says that the time is right for such a cross-disciplinary project because three disparate pursuits are coming together in what he calls a “perfect storm”.

Neuroscientists working with simple animals have learned much about the inner workings of neurons and the synapses that connect them, resulting in “wiring diagrams” for simple brains.

Supercomputing, in turn, can simulate brains up to the complexity of small mammals, using the knowledge from the biological research. Modha led a team that last year used the BlueGene supercomputer to simulate a mouse’s brain, comprising 55m neurons and some half a trillion synapses.

“But the real challenge is then to manifest what will be learned from future simulations into real electronic devices – nanotechnology,” Prof Modha said.

Technology has only recently reached a stage in which structures can be produced that match the density of neurons and synapses from real brains – around 10 billion in each square centimetre.

Does anyone else find it just a bit ironic that they aim for a cat brain next, after having simulated a mouse brain at one-tenth real time in April 2007?
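For a sense of scale, the quoted mouse-simulation figures are worth a quick back-of-envelope check (assuming "half a trillion" means 0.5 × 10¹² synapses):

```python
# Rough check of the quoted mouse-brain simulation figures.
# Assumption: "55m neurons" = 55 million, "half a trillion" = 0.5e12.
neurons = 55e6        # 55 million neurons
synapses = 0.5e12     # roughly half a trillion synapses

per_neuron = synapses / neurons
print(f"~{per_neuron:,.0f} synapses per neuron")  # ~9,091
```

That works out to roughly 9,000 synapses per neuron, which is in the same ballpark as common estimates for mammalian cortex.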


  1. Jonathan said,

    November 21, 2008 @ 9:58 pm

    This is super fascinating, one of the most interesting and important meldings of biology and computer stuff that people are doing. How many neurons in our brains? How many synapses? How can we model the astronomical numbers? Do we have to grow computer brains from computer embryos to get the connections right?

    So cool.

  2. Stephen said,

    November 27, 2008 @ 1:26 pm

    Pretty interesting. One pole of the debate is that the human brain is WAY more complex than those pursuing the mimicry of human intelligence believe. Others cite the fact that IBM’s machine can defeat the best chess Grandmasters as an interesting counterpoint, but this may have to do with the limitations of chess in terms of what is actually human intelligence (no “social brain” required). Another potentially long discussion here.

    It is interesting that the authors cite achieving a density of connectivity that can approximate estimates of brain synaptic density. Within this density, the question as to how connections are guided/determined would seem to be a crucial question. The genius of human brain development is in the guiding of connections–and the selection of connections as relates to situational factors (experience, as transmuted and interpreted by the brain). And then the potential to change–to “prune” and re-shape connections, as experiences (“life”) changes.

    But then, maybe this “development” is itself an artifact of our being biological beings. Maybe “experience” (and the associated subjectivity that we treasure) are expensive artifacts of what we call our “intelligence,” at least according to “Western” values. Maybe the post-singularity world will dispense with ego: NIRVANA!!.

  3. Jame5 » Cognitive computing update said,

    February 7, 2009 @ 5:57 pm

    […] with the Blue Brain project as well as advances in memristor based electronics this project’s stated goal of building […]

  4. Dr. Ronald J. Swallow said,

    January 21, 2010 @ 1:28 pm

    It is virtually impossible to measure by probes the neocortex’s detailed structure and function. The behavior of the neocortex must be logically deduced from what can be seen and grossly measured. One important logical deduction is that neurons cross-correlate input axonal excitation patterns with input connection strengths to a receiving neuron, and that cross-correlations are generally worthless unless they are normalized. There is no way this can be measured in the cortex. Yet, those of us in the electrical engineering profession know the importance of normalization in order to get useful pattern recognition from correlators. Secondly, the excitatory neurons must be able to compare their psps in order to perform pattern recognition.
    I have studied mutually inhibiting groups of neurons and find that they exhibit normalized pattern recognition behavior when the excitatory-to-inhibitory neuron connection is set large enough that the inhibitory neuron fires if any excitatory neuron fires, and that normalized behavior results if the inhibitory neuron connections vary by the same conditioned reflex rule as the excitatory connections. I predict that the neocortex simply consists of layers of mutually inhibiting neurons and that simulations of the neocortex need this constraint in order to be useful.
    Thirdly, ASIC approaches to building an efficient hardware simulator need to employ around 20 of the largest DDR memories available today per single ASIC chip in order to optimize the design. Thus, using ASIC chips with only internal memory is very inefficient. Also, modeling neuron nets to the point of detailing the pulse shape behavior is quite wasteful, since what neurons best respond to is the frequency of pulsing for pattern recognition purposes. That is, a group of neurons pulsing at a given frequency produces a steady psp, useful for pattern recognition purposes. The individual pulse effects are themselves quite noisy.
    I have studied neuron models for the last 40 years and have concluded that the above conclusions are likely true. The fact that neurons can do normalized correlations is a likely indicator that those of the human brain are using that property to their advantage.
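    The point about normalization is easy to illustrate. Below is a minimal sketch (my own toy example, not Swallow’s model) of why a raw dot-product correlator is misled by input magnitude while a normalized one recognizes the pattern itself:

    ```python
    import math

    def normalized_correlation(pattern, signal):
        """Cosine-style normalized cross-correlation of two equal-length
        vectors; returns a value in [-1, 1]."""
        dot = sum(p * s for p, s in zip(pattern, signal))
        norm = (math.sqrt(sum(p * p for p in pattern))
                * math.sqrt(sum(s * s for s in signal)))
        return dot / norm if norm else 0.0

    # A raw dot product grows with input amplitude; the normalized version
    # scores the same pattern identically regardless of excitation level.
    template = [1.0, 0.0, 1.0, 0.0]
    weak     = [0.1, 0.0, 0.1, 0.0]   # same pattern, low amplitude
    strong   = [0.0, 5.0, 0.0, 5.0]   # different pattern, high amplitude

    print(normalized_correlation(template, weak))    # 1.0 (match)
    print(normalized_correlation(template, strong))  # 0.0 (no match)
    ```

    Without the normalization, the second (non-matching but high-amplitude) input could easily out-score the first.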

    Ronald Swallow
    610 533 7091
