Relativistic irrationality

Imagine two agents A(i), each with a utility function F(i), a capability level C(i), and no knowledge of the other agent's F and C values. Both agents are given equal resources and are tasked with devising the most efficient and effective way to maximize their respective utility with said resources.
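A minimal sketch of this setup in Python (the class and field names are my own shorthand for F(i), C(i) and the trust values T(i->j) introduced below, not anything defined in the post):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Agent:
    """Toy stand-in for one agent A(i); all names here are illustrative placeholders."""
    name: str
    utility: Callable[[float], float]   # F(i): resources deployed -> payoff
    capability: float                   # C(i): how good a plan the agent can devise
    resources: float                    # equal for both agents by assumption
    trust: Dict[str, float] = field(default_factory=dict)  # T(i->j): probability the other is benign

# Both agents start with equal resources and no knowledge of each other's F or C.
a1 = Agent("A(1)", utility=lambda r: r, capability=1.0, resources=10.0)
a2 = Agent("A(2)", utility=lambda r: r, capability=1.0, resources=10.0)
a1.trust["A(2)"] = 1.0   # T(1->2), set per scenario below
a2.trust["A(1)"] = 1.0   # T(2->1)
```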

Scenario 1: Both agents have the same utility function, F(1) = F(2), the same level of knowledge, cognitive complexity and experience – in short, capability C(1) = C(2) – and a high level of mutual trust, T(1->2) = T(2->1) = 1. They will quickly agree on the way forward, pool their resources and execute their joint plan. Rather boring.

Scenario 2: Again we assume F(1) = F(2), but now C(1) > C(2), and again T(1->2) = T(2->1) = 1. The more capable agent will devise a plan, and the less capable agent will provide its resources and execute the trusted plan. A bit more interesting.

Scenario 3: F(1) = F(2) and C(1) > C(2), but this time T(1->2) = 1 and T(2->1) = 0.5, meaning the less powerful agent assumes with a probability of 50% that A(1) is in fact a self-serving optimizer whose diverging plan will turn out to be detrimental to A(2), while A(1) is certain that this is all just one big misunderstanding. The optimal plan devised under scenario 2 will now face opposition from A(2), although it would be in A(2)'s best interest to support it with its resources in order to maximize F(2); A(1), in turn, will see A(2)'s objection as detrimental to maximizing their shared utility function. Fairly interesting: based on the lack of trust and the difference in capability, each agent perceives the other agent's plan as irrational from its own point of view.
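A back-of-the-envelope illustration of A(2)'s dilemma; the payoff numbers are invented for the example and are not taken from the post:

```python
# Toy expected-value check for A(2) under scenario 3.
# All payoff numbers below are illustrative assumptions.
t_2_to_1 = 0.5           # T(2->1): probability A(2) assigns to A(1) being benign
payoff_if_benign = 10.0  # A(2)'s payoff from supporting a benign A(1)'s plan
payoff_if_selfish = 0.0  # A(2)'s payoff if A(1) turns out to be self-serving
fallback_payoff = 4.0    # what A(2) gets by withholding its resources and going alone

ev_support = t_2_to_1 * payoff_if_benign + (1 - t_2_to_1) * payoff_if_selfish
print(ev_support, fallback_payoff)  # 5.0 vs 4.0: supporting still wins here,
# but a slightly lower trust value would flip the decision and produce the
# "irrational-looking" opposition described above.
```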

Under scenario 3, both agents now have a variety of strategies at their disposal:

  1. deny pooling of part or all of one's resources = If we do not do it my way, you can do it alone.
  2. use resources to sabotage the other agent's plan = I must stop him and these crazy ideas!
  3. deceive the other agent in order to skew how it deploys strategies 1 and 2
  4. spend resources to explain the plan to the other agent = Ok – let's help him see the light
  5. spend resources on self-improvement to understand the other agent's plan better = Let's have a closer look, the plan might not be so bad after all
  6. strike a compromise to ensure a higher level of pooled resources = If we don't compromise, we both lose out

Number 1 is a given under scenario 3. Number 2 is risky, particularly as it would cause a further reduction in trust on both sides if the other party were to find out that this strategy had been deployed; the same holds for number 3. Number 4 seems like the way to go but may not always work, particularly with large differences in C(i) between the agents. Number 5 is a likely strategy when there is a fairly high level of trust. Most likely, however, is strategy 6.

Striking a compromise builds trust in repeated encounters and thus promises less objection, and therefore a higher total payoff, the next time around.
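A sketch of that repeated-encounter argument; the trust-update rule and all numbers are assumptions made for illustration, since the post does not specify any particular dynamics:

```python
# Iterated interaction: compromising raises trust, which raises the share of
# resources the other agent is willing to pool in later rounds.
def run_rounds(compromise: bool, rounds: int = 5) -> float:
    trust = 0.5          # T(2->1) at the start of scenario 3
    total_payoff = 0.0
    for _ in range(rounds):
        pooled = trust * 10.0              # resources A(2) contributes this round
        total_payoff += pooled + 5.0       # A(1)'s own contribution is fixed
        if compromise:
            trust = min(1.0, trust + 0.1)  # compromise builds trust
        else:
            trust = max(0.0, trust - 0.1)  # insisting on one's own plan erodes it
    return total_payoff

print(run_rounds(compromise=True))   # 60.0: higher cumulative payoff
print(run_rounds(compromise=False))  # 40.0: lower cumulative payoff
```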

Assuming the existence of an arguably optimal path leading to the maximally possible satisfaction of a given utility function, anything else would be irrational. Such a maximally intelligent algorithm actually exists in the form of Hutter's universal algorithmic agent AIXI. The only problem, however, is that executing said algorithm requires infinite resources and is thus rather impractical, as every decision will always have to be made under resource constraints.
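For reference, the standard textbook form of AIXI's action choice (quoted here from memory, not from the post) is an expectimax over all environment programs q, weighted by their length l(q):

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_k + \cdots + r_m \big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-l(q)} $$

The sum over all programs on a universal Turing machine U is precisely what makes the agent incomputable, so it can only ever serve as a theoretical yardstick.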

Consequently, every decision will be irrational to the degree that it differs from the unknowable optimal path that AIXI would produce. Throw in a lack of trust and varying levels of capability among the agents, and all agents will always have to adapt their plans and strike a compromise based on the other agent's relativistic irrationality, independent of their capabilities, in order to minimize the other agent's objection cost and thus maximize their respective utility functions.
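One way to read that definition operationally (a paraphrase of my own, not a formula from the post): an agent's irrationality relative to a given utility function is the gap between the utility its actual plan achieves and the utility the unknowable optimal plan would achieve.

```python
# Hypothetical formalization of "relativistic irrationality" as a utility
# shortfall. In practice u_optimal is unknowable (computing it is exactly the
# AIXI problem), so this is conceptual rather than computable.
def relativistic_irrationality(u_actual: float, u_optimal: float) -> float:
    """Degree to which a plan falls short of the optimal-path utility (0 = fully rational)."""
    if u_optimal <= 0:
        raise ValueError("optimal utility must be positive for this normalization")
    return max(0.0, (u_optimal - u_actual) / u_optimal)
```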

