Ensuring a positive transcension

Having organized my thoughts on rational morality in a paper, I would now like to apply the insights gained to develop a strategy for ensuring a positive transcension.

Due to the intrinsically moral nature of reality, the term 'positive singularity' becomes tautological: anything that desires to exist has to act in a moral way to prevent its own annihilation. Bringing about the singularity thus becomes rather simple and can be achieved in the following way:

  1. create an environment allowing for the existence of units of self replicating information
  2. ensure that the units of information can be acted upon by the forces of evolution
  3. plant an arbitrary self replicator
  4. wait

This could be realized by using the BOINC architecture for distributed computing to create a fuzzy copying environment implementing the above plan. The copying 'fuzziness', i.e. the error rate per copied bit, would have to be roughly matched to the complexity of the most complex self-replicator in the system, to allow for a gradual expansion of the system's complexity boundary and thus for the emergence of ever more rational agents.
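The fuzzy copying environment can be sketched in a few lines of Python. This is a toy illustration only: the post names no concrete data structures, so the bit-string replicators, the constant `c`, and the exact coupling of error rate to maximum genome length are all my assumptions, loosely guided by Eigen's error threshold (a genome of length L persists only when the per-bit error rate stays near or below 1/L).

```python
import random

def fuzzy_copy(genome, error_rate, rng):
    # Copy a bit string, flipping each bit independently with probability error_rate.
    return [bit ^ (rng.random() < error_rate) for bit in genome]

def step(population, c=0.5, rng=random):
    # One generation: every replicator is copied with a fuzziness tied to the
    # longest (i.e. most complex) replicator currently in the system, so the
    # error rate falls as the complexity ceiling rises.
    max_complexity = max(len(genome) for genome in population)
    error_rate = c / max_complexity
    return [fuzzy_copy(genome, error_rate, rng) for genome in population]

# Plant an arbitrary self-replicator (step 3 of the plan) and run one generation.
# Selection pressure (step 2) is omitted for brevity.
seed = [[1, 0, 1, 1, 0, 0, 1, 0]]
next_gen = step(seed, rng=random.Random(42))
```

In a full realization, BOINC clients would each host a slice of the population and selection would cull strings that fail to copy themselves; the sketch only shows the adaptive-fuzziness mechanism.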

Once the rationality of the emerging agents approached human levels, they would realize M! and thus never become a threat to humanity.

In memory of the main character of Jame5, I would like to dub this approach the Guido Borner method for bringing about a positive transcension.

Comments

Categorical imperative

8 hours of research later, and sooner than I expected, I am getting closer to the solution. It turns out that Immanuel Kant had some deep insights into morality in his time. Consider his rather famous categorical imperative:

“Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

That is what a rational morality would have to be grounded in. Unfortunately, however, Kant did not provide a maxim that would satisfy his imperative. Intuitively, and based on my previous rational analysis, I shall assume the following maxim as the basis for rational morality:

Ensure continued co-existence

Exploring the concept of rational morality further would clearly exceed the intended scope of Jame5, and thus I will continue my exploration of the issue on rationalmorality.info

Hope to see you there. This blog will remain active for Jame5 related updates.

Comments (5)

Rational morality

Having had the opportunity to sleep on my previous post about resolving moral paradoxes, I woke up trying to find a best-fit description for the concept that had started to build in my mind of how to rationally decide ethical questions. The result was 'rational morality' – intuitively a contradiction, but Google turned up quite a bit on the subject, and so did Amazon.

Intuitively it seems to match my thoughts closely, and so I will have to immerse myself in what all those before me have thought on the subject before attempting to develop the concept further. Until then it will become more quiet around here.

Looking forward to being back soon.

Comments

xkcd on beliefs


This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License. This means you’re free to copy and share these comics (but not to sell them). More details.

Comments

Resolving moral paradoxes

Assuming the rationally unobjectionable utility function of 'ensure continued co-existence', one must take it to be, at the very least, the implicit guiding principle of every human being. But who is running around chanting 'Must. Ensure. Continued. Co-existence.'? Not many. It follows that the implicit utility function Fi(i) generally diverges from the explicit utility function Fe(i) in humans, and that those whose Fe(i) best approximates Fi(i) have the best chance of ensuring continued co-existence.

Fe(i) is best understood as an evolved belief about what should guide an individual's actions, while Fi(i) is what rationally should guide them.

Not long ago, Eliezer presented two philosophers making the following statements:

Philosopher 1: “You should be selfish, because when people set out to improve society, they meddle in their neighbors’ affairs and pass laws and seize control and make everyone unhappy. Take whichever job that pays the most money: the reason the job pays more is that the efficient market thinks it produces more value than its alternatives. Take a job that pays less, and you’re second-guessing what the market thinks will benefit society most.”

Philosopher 2: “You should be altruistic, because the world is an iterated Prisoner’s Dilemma, and the strategy that fares best is Tit for Tat with initial cooperation. People don’t like jerks. Nice guys really do finish first. Studies show that people who contribute to society and have a sense of meaning in their lives, are happier than people who don’t; being selfish will only make you unhappy in the long run.”
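Philosopher 2's claim about the iterated Prisoner's Dilemma can be checked with a small simulation. The sketch below uses the standard payoff values (mutual cooperation 3 each, mutual defection 1 each, lone defector 5, exploited cooperator 0); the choice of opponent and round count are mine, not from the post.

```python
# Payoff table: (my payoff, opponent's payoff) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

# Two Tit for Tat players lock into full cooperation: 300 points each.
both_nice = play(tit_for_tat, tit_for_tat)      # (300, 300)
# Against a pure defector, Tit for Tat loses only the opening round and then
# matches defection for defection: 99 vs. 104 over 100 rounds.
vs_defector = play(tit_for_tat, always_defect)  # (99, 104)
```

The mutual-cooperation payoff is what lets nice strategies outscore defectors across a population of mostly reciprocating players, which is the sense in which "nice guys really do finish first".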

Philosopher 1 is promoting selfishness on the basis of altruism
Philosopher 2 is promoting altruism on the basis of selfishness

It is a contradiction – a paradox. But only in thought, not in reality. What is actually taking place is that both philosophers have intuitively realized part of Fi(i) and are merely rationalizing differently about why to change their respective Fe(i).

The first does so by wrongly applying the term 'selfishness', based on the fallacy that a higher-paid job contributes only to his personal continued existence by giving him more resources. In reality it contributes to ensuring continued co-existence, because he is taking the job that is considered to benefit society the most.

The second does so by wrongly applying the term 'altruism', based on the fallacy that his recommendations are detrimental to his personal continued existence because being Mr. Nice Guy means losing resources. In reality they contribute to ensuring continued co-existence, as they benefit not only him but the people around him as well.

The resolution is that the intuitive concepts of altruism and selfishness are rather worthless.

An altruist giving up resources in a way that reduces his own continued existence acts irrationally against the universal utility function, and is thus detrimental not only to himself but to all other agents as well.

An egoist acting truly selfishly uses resources in a way that is sub-optimal for maximizing the universal utility function, and is thus detrimental not only to all other agents but to himself as well.

It follows that in reality there is neither altruistic nor egoistic behavior – just irrational and rational behavior.

Comments (5)
