Archive for Uncategorized

Belief control II: cults

Today I ran into an article on The Psychology Behind Cults/Religion on Digg.

The article describes the similar processes by which the beliefs of new recruits are controlled in religions, cults, and other belief circles. It neglects to mention, however, that it is far from certain that these belief circles consciously manipulate their recruits in order to extract cash.

Remember: an unconscious lie is spoken as conscious truth – making it far more convincing and dangerous.


Shameless self-promotion

Today I received a note from George Garrett with some comments on my work:

“I very much enjoyed reading your paper on Benevolence.  It introduced me to some new ideas and seems like an excellent starting point for a plausible way one can frame morality, goodness, suffering and pain.  It seems like framing things in terms of evolution is the only way that things make sense.”

Music to my ears! But there is more:

 “This is the first satisfying definition of goodness I’ve come across that doesn’t seem arbitrary and up to the author’s whim.”

Strong and encouraging words indeed; many thanks, George. Based on his comments I have also updated my paper on friendly AI theory to version 1.1.

Do you have any comments? I would love to hear from you!


Understanding human inter-group competition

On page 85 of Jame5 I point out that:

“Culture is the byproduct of an animal’s acceptance of a shared moral-ethical meme complex to enable social collaboration in large groups.”

Later in the book I broaden the concept of a ‘moral-ethical meme complex’ to include all kinds of beliefs, and assert that these shared beliefs are fitness indicators relevant to inter-group competition. As a consequence, groups with fitter belief systems prosper, while groups with unfit belief systems either adapt or perish. For a quick introduction to my thoughts on this issue I suggest reading my paper on friendly AI theory, or Jame5, pages 69 and following.

In genetics the concept of group selection is controversial at best. On the memetic level, however, it becomes intuitively obvious. Let me explain:

With the advent of human thought, the focus of evolution shifted away from the genetic level and moved to an evolution of ideas and concepts about the world, which in turn gave rise to new ideas. The genes, the dominant fitness-determining information-carrying vehicles up to that point, became secondary.

The decisive difference between Homo sapiens and the other primates was the particularly useful ability to transfer these memes to other members of the group, including their young, by effective communication in the form of speech.

From that time forward, evolution on the genetic level slowly receded and eventually became secondary as a fitness determinant in humans, as memes had an ever larger impact on an individual’s fitness within the group as well as at the inter-group level. The evolution of memes continued through the Stone Age and the various metal ages on a material level, until it shifted toward harnessing more energy with the first steam engine in the late eighteenth century. What followed was the Industrial Revolution, and then the first computers, which eventually triggered the Information Age.

In summary: Human groups act as super organisms on the basis of shared beliefs with evolution continuing on the level of beliefs (memes).

Example: Capitalism vs Communism
The Cold War was a period of conflict between two groups with largely different belief systems: in the blue corner, mostly capitalist democracies; in the red corner, mostly communist dictatorships. Capitalism eventually ‘won’ because its belief system happened to allocate resources with alternative uses more efficiently and effectively. By now the former Eastern Bloc has largely abandoned the less fit ideology and is moving on.

Example: Market Economy
In market economies, companies can be seen as groups competing for the scarce resource of money. A company’s culture, policies, processes, and intellectual property are its beliefs, and its staff forms the company’s embodiment as a group. Companies compete in the marketplace, act, adapt, learn, and form alliances. Those with fitter belief systems survive and flourish, while those that are less fit go bankrupt and ‘die’.
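This selection dynamic can be sketched as a toy simulation. All numbers here are illustrative assumptions of mine, not from the post: each ‘company’ is reduced to a single fitness score standing in for its belief system, below-average firms go bankrupt each round, and survivors spawn imitators with slight mutations.

```python
import random

random.seed(0)

# Toy sketch of market selection on 'belief systems': each company is
# just a fitness number. Below-average firms 'die' each round and are
# replaced by noisy copies of survivors. All parameters are illustrative.
companies = [random.uniform(0.1, 1.0) for _ in range(10)]
initial_mean = sum(companies) / len(companies)

for generation in range(50):
    mean = sum(companies) / len(companies)
    survivors = [f for f in companies if f >= mean]   # bankruptcies
    while len(survivors) < len(companies):
        parent = random.choice(survivors)             # imitators enter
        survivors.append(max(0.01, parent + random.gauss(0, 0.05)))
    companies = survivors

final_mean = sum(companies) / len(companies)
print(round(final_mean, 2))  # mean fitness has risen above its starting value
```

Even with purely random mutations, keeping only above-average firms ratchets the population’s mean fitness upward, which is the sense in which ‘evolution continues’ here.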

And so evolution continues.


Estimating cognitive evolution’s complexity boundary in humans

As touched upon earlier, genetic evolution is complexity bound: to about 25 megabytes, because, roughly speaking, genetic evolution is not going to support more than 10^8 meaningful bases with 1 bit of selection pressure and a 10^-8 error rate.
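The arithmetic behind that figure can be checked directly. A minimal sketch, under my assumption that each base carries 2 bits (there are four possible nucleotides):

```python
# Back-of-the-envelope check of the 25-megabyte bound on genetic evolution.
# Assumption (mine, not from the post): 4 nucleotides -> 2 bits per base.
meaningful_bases = 10**8   # the most that ~1 bit of selection pressure can
                           # maintain against a ~1e-8 per-base error rate
bits_per_base = 2
total_megabytes = meaningful_bases * bits_per_base / 8 / 1e6  # bits -> MB
print(total_megabytes)  # 25.0
```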

Reflecting on this complexity boundary in genetic evolution, I wondered what cognitive evolution’s complexity boundary in humans might be. As a basis I will assume that:

1) cognitive evolution in humans is taking place on the level of beliefs (a brief summary can be found in my paper on friendly AI theory)

2) beliefs are stored in the neural structure of the brain

3) the informational complexity of the neural structure of the brain that stores beliefs is equal to cognitive evolution’s complexity boundary in humans

Being a friend of Google, I quickly came across this interesting estimate of the informational storage capacity of the human brain:

“The human brain contains about 50 billion to 200 billion neurons (nobody knows how many for sure), each of which interfaces with 1,000 to 100,000 other neurons through 100 trillion (10^14) to 10 quadrillion (10^16) synaptic junctions. Each synapse possesses a variable firing threshold which is reduced as the neuron is repeatedly activated. If we assume that the firing threshold at each synapse can assume 256 distinguishable levels, and if we suppose that there are 20,000 shared synapses per neuron (10,000 per neuron), then the total information storage capacity of the synapses in the cortex would be of the order of 500 to 1,000 terabytes.”

Staying on the safe side, I will assume that this estimate is off by two orders of magnitude and that only one percent of the human brain is actually involved in storing beliefs. That still leaves 50 to 100 gigabytes, so I estimate that human cognitive evolution on the level of beliefs is bound by a complexity on the order of 100 gigabytes, roughly 4,000 times higher than that of genetic evolution.
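The chain of discounts can be made explicit. A sketch in decimal units; note that a figure of 4,096 rather than 4,000 would follow from using 1 GB = 1,024 MB instead:

```python
# Sketch of the estimate chain from the quoted brain-capacity figure.
brain_capacity_tb = 1000   # upper end of the quoted 500-1000 TB estimate
safety_discount = 100      # assume the estimate is off by two orders of magnitude
belief_fraction = 0.01     # assume only 1% of the brain stores beliefs

belief_storage_gb = brain_capacity_tb * 1000 / safety_discount * belief_fraction
genetic_bound_mb = 25      # genetic evolution's bound from the earlier post
ratio = belief_storage_gb * 1000 / genetic_bound_mb

print(belief_storage_gb)   # 100.0 (gigabytes)
print(ratio)               # 4000.0
```

Starting from the lower 500 TB figure instead would halve both numbers, which is why I treat the result as an order-of-magnitude estimate.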


Self improvement versus creating a non-eudaemonic dystopia

I recently read Nick Bostrom’s paper on the future of human evolution. The paper was published in 2004/5, and his views correlate with mine quite well. I am pleased to note that I am only about two to three years behind the times in having formulated my thoughts on the issue at hand. Ha! Not bad for an amateur. Moving forward…

Reading Bostrom’s paper was fascinating. In essence he makes the point that continuing to increase fitness will result in a world that is dystopian when measured against present human values, and I agree. From the perspective of a present-day human, the evolution towards non-eudaemonic agents, as Bostrom puts it, seems like a scenario one has evolved to dislike. Since we have evolved to regard as good what increased fitness in our ancestors, we are bound to fail to see anything that is no longer recognizably human as a desirable future state. But is the deep desire to improve oneself not just as much a part of human nature? And where but to something posthuman can such self-improvement lead, if we forever judge what is desirable from our current perspective?

Self-improvement can be seen as a series of gradual changes. Consider the following scenario: a person approaches self-improvement in such a way that every improved version of his self is desirable from the unimproved version’s point of view. How desirable will the 100th improvement look from the point of view of the original? How about the one millionth? No matter where the original draws the line, at some point the improved version will turn into something that is unrecognizable, incomprehensible, yes, even scary to the original.
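The drift argument can be made concrete with a toy model, with all parameters hypothetical: represent a ‘self’ as a vector of trait values and apply many tiny ‘improvements’, each barely distinguishable from its predecessor, yet cumulatively transformative.

```python
import math
import random

random.seed(42)

# Toy model of gradual self-improvement: a 'self' is a vector of trait
# values. Each step nudges every trait only slightly, so every version is
# close to its predecessor, yet distance from the original accumulates.
def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

original = [0.0] * 10          # the unimproved self
current = original[:]
step_sizes = []

for step in range(1000):
    previous = current[:]
    current = [x + random.gauss(0, 0.01) for x in current]  # one tiny 'improvement'
    step_sizes.append(distance(previous, current))

print(round(max(step_sizes), 3))              # every single step is small
print(round(distance(original, current), 3))  # total drift dwarfs any one step
```

Each successive version would endorse the next, yet the millionth version sits far from anything the original would recognize; the acceptability of each step says nothing about the acceptability of the destination.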

How do you picture the encounter between an early rodent – one of our direct ancestors from a few tens of millions of years ago – and a modern-day human? The rodent would probably flee in panic, and some humans likely would as well. But would the rodent lament the sad abandonment of gnawing on stones? After all, it is enjoyable and keeps one’s teeth in shape. Or would it – given the full understanding of a human being – appreciate that other concepts, worries and habits are what a human holds dear in modern times? Which perspective takes priority? “Of course the human one!” is what one would expect from the anthropic chauvinists’ camp. But would the one millionth improved version discussed earlier not argue the same for its own manifestation?

Reconciling the desire to satisfy an individual’s ever-changing current self with the desire for self-improvement, and the implications of both for the future of human evolution, is the challenge that needs to be addressed. Bostrom does so by suggesting what he calls a Singleton: an entity policing continued human evolution to maintain the status quo.

In the context of my friendly AI theory I suggest an approach similar to Bostrom’s Singleton, however honoring Ben Goertzel’s ‘voluntary, joyous, growth’ concept and thus allowing for the possibility of continuous self-improvement.

Specifically, I argue for a friendly AI to:

A) change the environment(s) humans are in to increase an individual’s fitness, as opposed to changing the genetic/memetic makeup of an individual to adapt it better to its environment, and

B) reconcile our desire for self-improvement with the problematic results discussed above by making growth optional as well as rewarding.

