More encouraging feedback

3 updates in one day – it is getting out of hand, I know…

I have stumbled across two encouraging posts about Jame5 that I would like to share. The first is by Constantin Gonzalez from his BarCamp Munich 2008 summary:

Another great way to think about the future is to read Stefan Pernar’s sci-fi thriller “Jame5 – A Tale of Good and Evil”. This book starts in the best Michael Crichton style and then becomes a deep and thoughtful discussion around the philosophy of the future, when mankind confronts the powers of strong AI. […] Highly recommended.

Secondly, Marc Garnaut was kind enough to write about Jame5 on his blog:

I’ve been immersed in a book recently. It’s a fictional story, but it’s based on a lot of scientific fact. A bit like The Matrix or anything in the cyberpunk genre by authors like Neal Stephenson or William Gibson.

Much obliged gentlemen, much obliged indeed. Did I mention that I am still looking for a publisher? Hint! Hint! 🙂

Comments (2)

Scientific manipulation of beliefs

A very interesting piece on the malleability of beliefs, using the example of whether the origin of life is the result of divine creation or of natural laws, has been put up on Science Daily. With belief control being a central concept of cognitive evolution in Jame5, I found the article rather fitting.

Comments

Moore’s Law alive and kicking for the foreseeable future

The folks over at Future Blogger are reporting in a very detailed piece that there is no end in sight for Moore’s Law. They assert that when CMOS, the prevalent technology in microchip manufacturing since the late 1960s, hits a brick wall in 2011, chip manufacturers will have to resort to nanotechnology for feature sizes below 22 nanometers.

They foresee a move from largely 2D chips towards 3D chips and acknowledge the importance of the recently discovered memristor as well as breakthroughs in molecular transistor technology. Both are significant discoveries with regard to keeping Moore’s Law relevant over the coming decades.

The exact quote escapes me, but I recall having read that, in principle, no natural laws prevent continued miniaturization in computation all the way down to the Planck scale.
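Just how far away the Planck scale is can be sketched with a quick back-of-envelope calculation. This is purely illustrative: the 22-nanometer figure is the feature size mentioned above, and the Planck length value is the standard ~1.616e-35 metres.

```python
import math

# Hedged back-of-envelope: how many more halvings of feature size would
# physics allow before hitting the Planck scale?
FEATURE_SIZE_M = 22e-9       # the 22 nm node mentioned in the post
PLANCK_LENGTH_M = 1.616e-35  # standard Planck length, in metres

ratio = FEATURE_SIZE_M / PLANCK_LENGTH_M  # how many Planck lengths in 22 nm
halvings = math.log2(ratio)               # doublings of density per dimension

print(f"22 nm is about {ratio:.1e} Planck lengths")
print(f"room for roughly {halvings:.0f} more halvings of feature size")
```

Of course nothing like that many halvings is practically reachable; the point is only that the Planck scale itself is nowhere near the binding constraint.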

Comments

Brain simulations stomping forward

After my recent update on whole brain emulation, the BBC is now reporting on real-world research that ups the ante in the race to create a functioning simulation of ever-bigger brains:

IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do.

The longer-term goal is to create a system with the level of complexity of a cat’s brain.

Prof Modha says that the time is right for such a cross-disciplinary project because three disparate pursuits are coming together in what he calls a “perfect storm”.

Neuroscientists working with simple animals have learned much about the inner workings of neurons and the synapses that connect them, resulting in “wiring diagrams” for simple brains.

Supercomputing, in turn, can simulate brains up to the complexity of small mammals, using the knowledge from the biological research. Modha led a team that last year used the BlueGene supercomputer to simulate a mouse’s brain, comprising 55m neurons and some half a trillion synapses.

“But the real challenge is then to manifest what will be learned from future simulations into real electronic devices – nanotechnology,” Prof Modha said.

Technology has only recently reached a stage in which structures can be produced that match the density of neurons and synapses from real brains – around 10 billion in each square centimetre.

Does anyone else find it just a bit ironic that they aim for a cat brain next, after having simulated a mouse brain at one tenth of real time in April 2007?
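A rough sense of the jump involved can be sketched with naive scaling. The mouse figures (55 million neurons, one tenth of real time) come from the article quoted above; the cat figure of roughly a billion neurons is my own order-of-magnitude assumption for illustration, not a number from the source.

```python
# Naive linear-scaling sketch of the mouse-to-cat step.
MOUSE_NEURONS = 55e6   # from the BBC piece quoted above
CAT_NEURONS = 1e9      # assumed rough order-of-magnitude estimate
MOUSE_SLOWDOWN = 10    # the mouse simulation ran at one tenth of real time

scale_up = CAT_NEURONS / MOUSE_NEURONS      # ~18x more neurons
compute_factor = scale_up * MOUSE_SLOWDOWN  # extra compute for real time

print(f"cat/mouse neuron ratio: ~{scale_up:.0f}x")
print(f"naive compute factor for a real-time cat brain: ~{compute_factor:.0f}x")
```

Under these assumptions the team would need on the order of a couple of hundred times more effective compute, and that is before accounting for synapse counts growing faster than neuron counts.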

Comments (4)

Whole brain emulation roadmap now available from FHI

Strides are being made toward actually emulating the human brain, as the Institute for Ethics and Emerging Technologies reports:

The Future of Humanity Institute, founded and run by IEET founder and chair Nick Bostrom, has just published a roadmap of the scientific research and technological innovations required to eventually completely model the human brain in software.

Whole brain emulation (WBE) is the possible future one-to-one modelling of the function of the human brain. It represents a formidable engineering and research problem, yet one which appears to have a well-defined goal and could, it would seem, be achieved by extrapolations of current technology. Since the implications of successful WBE are potentially very large the Future of Humanity Institute hosted a workshop in Oxford on 26-27 May, 2007. Invited experts from areas such as computational neuroscience, brain-scanning technology, computing, and neurobiology presented their findings and discussed the possibilities, problems and milestones that would have to be reached before WBE becomes feasible. The result of the workshop is the following roadmap.

Progress seems unstoppable…

Comments (1)
