Archive for Uncategorized

Absolute irrationality

Considering the effects of relativistic irrationality, one wonders whether there is a universally applicable utility function that cannot be rationally objected to. Consider axiom 1.2.3.2, on which I base my concept of morality:

1.2.3.2 To exist is preferable over not to exist

Objecting to this statement would consequently be equivalent to endorsing self-annihilation. Reformulating axiom 1.2.3.2 as a utility function, one arrives at the following unobjectionable formulation:

Ensure continued co-existence

Not only can an individual not rationally object to that, but no one in a group can rationally object to an individual holding said goal. The individual cannot, because objecting would imply a desire for self-annihilation, and the others cannot, because objecting would imply a desire to be annihilated. Any objection to the above utility function can thus be considered irrational.

Comments (3)

Relativistic irrationality

Imagine two agents A(i), each with a utility function F(i), a capability level C(i) and no knowledge of the other agent’s F and C values. Both agents are given equal resources and are tasked with devising the most efficient and effective way to maximize their respective utility with said resources.

Scenario 1: Both agents have identical utility functions F(1) = F(2), the same level of knowledge, cognitive complexity and experience – in short, capability C(1) = C(2) – and a high level of mutual trust T(1->2) = T(2->1) = 1. They will quickly agree on the way forward, pool their resources and execute their joint plan. Rather boring.

Scenario 2: Again we assume F(1) = F(2), however C(1) > C(2) – and again T(1->2) = T(2->1) = 1. The more capable agent will devise a plan, and the less capable agent will provide its resources and execute the trusted plan. A bit more interesting.

Scenario 3: F(1) = F(2) and C(1) > C(2), but this time T(1->2) = 1 and T(2->1) = 0.5, meaning the less powerful agent assumes with a probability of 50% that A(1) is in fact a self-serving optimizer whose differing plan will turn out to be detrimental to A(2), while A(1) is certain that this is all just one big misunderstanding. The optimal plan devised under scenario 2 will now face opposition from A(2), although it would be in A(2)’s best interest to support it with its resources in order to maximize F(2); A(1), in turn, will see A(2)’s objection as detrimental to maximizing their shared utility function. Fairly interesting: based on a lack of trust and differences in capability, each agent perceives the other agent’s plan as irrational from its respective point of view.
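To make the trust asymmetry concrete, here is a minimal sketch in Python of how A(2) might evaluate A(1)’s plan; the payoff numbers and the simple expected-value rule are my own illustrative assumptions, not something derived in the scenario itself.

```python
# Minimal sketch of scenario 3: A(2) weighs A(1)'s proposed plan by its trust value.
# All payoff numbers are illustrative assumptions.

def perceived_value(payoff_if_honest, payoff_if_self_serving, trust):
    """Expected payoff of supporting the plan, from the evaluating agent's view.

    trust is the probability the evaluator assigns to the proposer being honest.
    """
    return trust * payoff_if_honest + (1 - trust) * payoff_if_self_serving

joint_payoff = 10      # payoff to A(2) if the plan really maximizes the shared F
exploited_payoff = -8  # payoff to A(2) if A(1) is a self-serving optimizer
go_it_alone = 3        # what A(2) can achieve with its own resources, without pooling

for trust in (1.0, 0.5):
    value = perceived_value(joint_payoff, exploited_payoff, trust)
    print(trust, value, "support the plan" if value > go_it_alone else "withhold resources")
```

With full trust, supporting the plan clearly dominates; at T(2->1) = 0.5 its expected value drops below what A(2) can get on its own, which is exactly why the objection looks rational from A(2)’s side and irrational from A(1)’s.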

Under scenario 3, both agents now have a variety of strategies at their disposal:

  1. deny pooling of part or all of one’s resources = If we do not do it my way, you can do it alone.
  2. use resources to sabotage the other agent’s plan = I must stop him and these crazy ideas!
  3. deceive the other agent in order to skew how the other agent deploys strategies 1 and 2
  4. spend resources to explain the plan to the other agent = Ok – let’s help him see the light
  5. spend resources on self-improvement to understand the other agent’s plan better = Let’s have a closer look, the plan might not be so bad after all
  6. strike a compromise to ensure a higher level of pooled resources = If we don’t compromise, we both lose out

Number 1 is a given under scenario 3. Number 2 is risky, particularly as it would cause a further reduction in trust on both sides should the other party find out that this strategy was deployed – and similarly for number 3. Number 4 seems like the way to go but may not always work, particularly with large differences in C(i) among the agents. Number 5 is a likely strategy given a fairly high level of trust. Most likely, however, is strategy 6.

Striking a compromise builds trust in repeated encounters and thus promises less objection, and therefore a higher total payoff, the next time around.
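A rough sketch of that repeated-encounter argument, under assumptions I am adding purely for illustration (a fixed per-round cost of compromising and a simple trust-update rule that feeds back into how many resources get pooled):

```python
# Illustrative sketch: compromising costs some payoff per round, but raises trust,
# and higher trust means more pooled resources in later rounds.
# All numbers and the trust-update rule are assumptions made for this example.

def run(rounds, compromise, trust=0.5):
    total = 0.0
    for _ in range(rounds):
        pooled = trust * 10                             # resources the other agent will pool
        payoff = pooled * (0.8 if compromise else 1.0)  # compromising gives up 20% per round
        total += payoff
        if compromise:
            trust = min(1.0, trust + 0.2)               # honoured compromises build trust
        else:
            trust = max(0.0, trust - 0.1)               # pushing one's own plan erodes it
    return total

print(run(rounds=5, compromise=True))   # 32.8 -> higher total payoff across repeated encounters
print(run(rounds=5, compromise=False))  # 15.0 -> short-term gain, long-term loss of pooled resources
```

With these toy numbers the short-term loss from compromising is more than paid back by the larger pool of resources that the rebuilt trust makes available in later rounds.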

Assuming the existence of an optimal path leading to the maximally possible satisfaction of a given utility function, anything else would be irrational. Such a maximally intelligent algorithm actually exists in the form of Hutter’s universal algorithmic agent AIXI. The only problem is that executing said algorithm requires infinite resources, which makes it rather impractical, as every decision will always have to be made under resource constraints.

Consequently, every decision will be irrational to the degree that it differs from the unknowable optimal path that AIXI would produce. Throw in a lack of trust and varying levels of capability among the agents, and all agents will always have to adapt their plans and strike a compromise based on the other agents’ relativistic irrationality, independent of their capabilities, in order to minimize the other agents’ objection cost and thus maximize their respective utility functions.
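One way to read “irrational to the degree that it differs from the optimal path” is as a plain utility shortfall. The sketch below assumes a toy setting in which the optimal value is known; in practice the AIXI benchmark is incomputable, so this quantity could only ever be estimated against some bounded proxy.

```python
# Toy illustration: "relativistic irrationality" of a plan measured as the utility
# shortfall relative to the (in practice unknowable) optimal plan.
# The utility values below are invented for the example.

def irrationality(utility_of_plan, utility_of_optimal_plan):
    """Degree to which a plan falls short of the optimal path, in utility terms."""
    return utility_of_optimal_plan - utility_of_plan

optimal = 100.0      # what an unbounded, AIXI-like agent could achieve
agent_1_plan = 92.0  # a highly capable but resource-bounded agent
agent_2_plan = 70.0  # a less capable agent

print(irrationality(agent_1_plan, optimal))  # 8.0
print(irrationality(agent_2_plan, optimal))  # 30.0
# Both agents are "irrational" to some degree; they differ only in how much.
```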

Comments (5)

Jame5 arrived in meatspace

Yesterday I took delivery of 500 copies of Jame5 as a 188-page paperback. The quality is good and I am happy with how the print turned out – nice. If you prefer the paperback over the PDF, feel free to buy a copy – the content is identical. As to the price, I will charge 29.99 Euro plus 3 Euro postage and packing to any destination worldwide. So for a grand total of 32.99 Euro you can own your very own first edition of Jame5!

As to forms of payment, I will accept bank transfer inside the European Union and PayPal from the rest of the world – no money orders, sorry. Feel free to drop me an email and I will give you the payment details. Include a desired dedication and I will be happy to oblige. Letting me know your shipping address would not hurt either.

Many thanks!

Comments

Putting mind over matter

The evolution of cognition is the story of an ever-accelerating fitness optimization process. A short introduction can be found in my paper on friendly AI theory, and a longer explanation is provided in Valentin Turchin’s book, The Phenomenon of Science.

Applying metasystem transition theory, the evolution of cognition can be understood as having gone through the following stages:

  • position
  • movement controls position
  • simple reflex controls movement
  • complex reflex controls simple reflexes
  • associated learning controls complex reflexes
  • imagination controls associated learning
  • conscious thought controls imagination
  • beliefs control conscious thoughts
  • charisma and science control beliefs

The roots of our animal urges – such as cravings for a cheeseburger with fries – have probably evolved on the level of the complex reflex in a calorie-scarce reality. So what is keeping (some of) us from constantly overindulging and satisfying this and other animal urges? It is of course our realization that overeating – once necessary to prevent starvation should the next harvest not go so well – is not worth the negative side effects in our post-caloric-scarcity society.

Our beliefs, such as ‘overeating is bad for me’, are controlling our lower-level complex reflexes, such as ‘must eat good food’, and so we diet and exercise. That’s how evolution has put mind over matter – easy as pie.

Comments (2)

To be, or not to be, that is the question

“Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them? To die: to sleep”
(Hamlet, Act 3, Scene 1)

Now I’m no literary critic. I could not help being reminded of this most famous snippet of Shakespearean writing, however, when putting together the set of axiomatic beliefs on which the core belief of my friendly AI theory is founded: that is good what increases fitness.

Inspired by a comment on the famous geek site Slashdot.org, I sat down to do the following:

  • write down a strongly held belief => “That is good what increases fitness.”
  • write down the set of “sub-beliefs” of mine that form the basis of that belief
  • iterate above steps, applying the same process to each belief listed
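The listing procedure itself is mechanical enough to sketch in code. The recursive decomposition and helper names below are illustrative assumptions of mine; only the two quoted beliefs come from this post.

```python
# Sketch of the belief-decomposition exercise: start from one strongly held belief,
# list the sub-beliefs it rests on, and recurse. Structure and names are illustrative.

def decompose(belief, sub_beliefs_of, depth=0, seen=None):
    """Print a belief tree, flagging beliefs that show up more than once."""
    if seen is None:
        seen = set()
    marker = " (already listed)" if belief in seen else ""
    print("  " * depth + "- " + belief + marker)
    if belief in seen:
        return
    seen.add(belief)
    for sub in sub_beliefs_of.get(belief, []):
        decompose(sub, sub_beliefs_of, depth + 1, seen)

sub_beliefs_of = {
    "That is good what increases fitness.": [
        "To exist is preferable over not to exist",
    ],
    "To exist is preferable over not to exist": [],
}

decompose("That is good what increases fitness.", sub_beliefs_of)
```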

The result was very interesting. Soon I realized that the listed beliefs started to contradict each other, so I had to think more deeply and rewrite some of them. That led to new insights and resulted in a set of 40 beliefs. Some of them are trivial and some of them are interesting. Most axiomatic, however, is the following belief:

1.2.3.2 To exist is preferable over not to exist

To be, or not to be, that is the question. Is that not the metaphorical question implicitly posed by reality to every living thing: ‘Can you exist?’

Over the course of evolution this question was first asked and answered passively on the chemical level, and later actively ‘pondered’ on the cognitive level, to avoid reality taking its toll. With the realization that what is good is what increases fitness, one can start to actively and consciously develop strategies for ensuring continued existence.

The rise of a non-friendly AI then becomes but one of many existential risks to avert.

Comments (7)
