Ensuring a positive transcension

Having organized my thoughts on rational morality in a paper, I would now like to apply the insights gained to develop a strategy for ensuring a positive transcension.

Due to the intrinsically moral nature of reality, the term ‘positive singularity’ becomes tautological, as anything that desires to exist has to act in a moral way to prevent its self-annihilation. Bringing about the singularity thus becomes rather simple and can be achieved in the following way:

  1. create an environment allowing for the existence of units of self-replicating information
  2. ensure that the units of information can be acted upon by the forces of evolution
  3. plant an arbitrary self-replicator
  4. wait

This could be realized by using the BOINC architecture for distributed computing to create a ‘fuzzy’ copying environment. The copying fuzziness, i.e. the error rate per copied bit, would have to be kept roughly inversely proportional to the complexity of the most complex self-replicator in the system, so as to stay below the error threshold while allowing a gradual expansion of the system’s complexity boundary and thus the emergence of ever more rational agents.
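To make this concrete, here is a minimal sketch of such a fuzzy copying loop. Everything specific in it is an assumption of mine rather than part of the plan above: the bit-string genomes, the made-up ERROR_BUDGET constant, and the use of genome length as a stand-in for both complexity and fitness. The actual BOINC job distribution is omitted; the point is only to show the per-bit error rate being scaled down as the most complex replicator in the population grows.

```python
import random

# Hypothetical constant: expected mutations per copy of the most complex
# replicator. Keeping this below ~1 keeps copying fidelity high enough
# for complexity to accumulate rather than decay.
ERROR_BUDGET = 0.5

def fuzzy_copy(genome: str, error_rate: float) -> str:
    """Copy a bit string, flipping each bit with probability error_rate."""
    return "".join(
        bit if random.random() >= error_rate else ("1" if bit == "0" else "0")
        for bit in genome
    )

def generation(population: list[str]) -> list[str]:
    # Per-bit error rate scaled inversely to the most complex (here:
    # longest) replicator currently in the system.
    max_complexity = max(len(g) for g in population)
    error_rate = ERROR_BUDGET / max_complexity
    # Stand-in selection: treat genome length as fitness.
    weights = [len(g) or 1 for g in population]
    parents = random.choices(population, weights=weights, k=len(population))
    return [fuzzy_copy(p, error_rate) for p in parents]

# Steps 3 and 4 of the plan: plant an arbitrary self-replicator, then wait.
population = ["1011001110001011"] * 64
for _ in range(10_000):
    population = generation(population)
```

In a real BOINC deployment each generation step would presumably be cut into work units and distributed to volunteer hosts; the loop above simply collapses that onto one machine.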

Once the rationality of the emerging agents approached human levels, they would realize M! and thus never become a threat to humanity.

In memory of the main character of Jame5, I would like to dub this approach the Guido Borner method for bringing about a positive transcension.


Resolving moral paradoxes

Assuming the rationally unobjectionable utility function of ‘ensure continued co-existence’, one must assume it to be at least the implicit guiding principle of every human being. But who runs around chanting ‘Must. Ensure. Continued. Co-existence.’? Hardly anyone. It follows that the implicit utility function Fi(i) generally diverges from the explicit utility function Fe(i) in humans, and that those whose Fe(i) best approximates Fi(i) have the best chance of ensuring continued co-existence.

Fe(i) can best be understood as an evolved belief about what should guide an individual’s actions, while Fi(i) is what rationally should guide them.
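One hypothetical way to make ‘best approximates’ precise is to measure the gap between Fe(i) and Fi(i) over a fixed set of actions. The mean-squared-gap metric and the toy utility numbers below are my own illustrative assumptions, not anything defined in the paper:

```python
from typing import Callable

Utility = Callable[[str], float]

def divergence(f_e: Utility, f_i: Utility, actions: list[str]) -> float:
    """Mean squared gap between an agent's explicit utility Fe and the
    implicit utility Fi over a fixed action set."""
    return sum((f_e(a) - f_i(a)) ** 2 for a in actions) / len(actions)

# Toy numbers: Fi scores each action by how much it furthers continued
# co-existence; Fe is the agent's evolved belief about what to do.
actions = ["cooperate", "defect", "share", "hoard"]
f_i = {"cooperate": 1.0, "defect": -1.0, "share": 0.8, "hoard": -0.5}.get
f_e = {"cooperate": 0.9, "defect": -0.2, "share": 0.3, "hoard": 0.4}.get

# The lower the divergence, the better Fe(i) approximates Fi(i).
print(divergence(f_e, f_i, actions))
```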

Not long ago Eliezer presented two philosophers making the following statements:

Philosopher 1: “You should be selfish, because when people set out to improve society, they meddle in their neighbors’ affairs and pass laws and seize control and make everyone unhappy. Take whichever job that pays the most money: the reason the job pays more is that the efficient market thinks it produces more value than its alternatives. Take a job that pays less, and you’re second-guessing what the market thinks will benefit society most.”

Philosopher 2: “You should be altruistic, because the world is an iterated Prisoner’s Dilemma, and the strategy that fares best is Tit for Tat with initial cooperation. People don’t like jerks. Nice guys really do finish first. Studies show that people who contribute to society and have a sense of meaning in their lives, are happier than people who don’t; being selfish will only make you unhappy in the long run.”

Philosopher 1 is promoting altruism on the basis of selfishness.
Philosopher 2 is promoting selfishness on the basis of altruism.

It is a contradiction – a paradox. But only in thought, not in reality. What is actually taking place is that both philosophers have intuitively realized part of Fi(i) and are merely rationalizing differently about why their respective Fe(i) should change.

The first does so by wrongly applying the term ‘selfish’, based on the fallacy that a higher-paid job contributes only to his personal continued existence by giving him more resources, while in reality it contributes to ensuring continued co-existence, because he is taking the job the market considers to benefit society the most.

The second does so by wrongly applying the term ‘altruistic’, based on the fallacy that his recommendations are detrimental to his personal continued existence because being Mr. Nice Guy costs him resources, while in reality they too contribute to ensuring continued co-existence, since they benefit not only him but the people around him as well.

The resolution, then, is that the intuitive concepts of altruism and selfishness are rather worthless.

An altruist giving up resources in a way that reduces his own chances of continued existence would be acting irrationally against the universal utility function, and would thus be detrimental not only to himself but to all other agents.

An egoist acting truly selfishly would use resources in a way that is sub-optimal for maximizing the universal utility function, and would thus be detrimental not only to all other agents but to himself as well.

It follows that in reality there is neither altruistic nor egoistic behavior – just irrational and rational behavior.
