On benevolence and friendly AI theory

Jame5 is not just a science fiction novel – it is a science fiction novel with a cause. Ensuring the creation of a friendly AI is hard for many reasons:

  • Creating an AGI is hard
  • Goal retention is hard
  • Recursive self-improvement is hard

The question of what friendliness means, however, predates all of those problems; it is a separate one and needs to be answered before the creation of a friendly AI can be attempted. Coherent Extrapolated Volition, CEV for short, is Eliezer S. Yudkowsky’s take on Friendliness.

While CEV is a fine description of what a friendly AGI will do, my critique of CEV is that it postpones answering the question of what friendliness specifically is until after we have an AGI that will answer that question for us.

Yes – a successfully implemented friendly AGI will do ‘good’ stuff and act in our ‘best interest’. But what is good, and what is our best interest? In Jame5 I provide a different solution to the friendliness issue, and I suggest skipping right to the end of chapter 9 for anyone who would like to get straight to the meat.

In addition, I have summarized my core friendliness concepts in a paper called ‘Benevolence – a Materialist Philosophy of Goodness’ (2007/11/09 UPDATE: latest version here), at the end of which I formulate the following friendly AGI supergoal:

Definitions:

  • Suffering: negative subjective experience, equivalent to the subjective departure from an individual’s model of its optimal fitness state as encoded in its genome/memome
  • Growth: absolute increase in an individual’s fitness
  • Joy: positive subjective experience, equivalent to the subjective contribution to moving closer towards an individual’s model of its optimal fitness state as encoded in its genome/memome

Derived friendly AGI supergoal: “Minimize all involuntary human suffering, direct all unavoidable suffering towards growth, and reward all voluntary suffering contributing to an individual’s growth with an equal or greater amount of joy.”
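
To make the definitions above a little more concrete, here is a minimal toy sketch (in Python) of one way they could be formalized – the vector-valued states, the Euclidean-distance fitness model and the function names are illustrative assumptions on my part, not something prescribed by the paper:

```python
import numpy as np

def suffering(state, optimal_state):
    """Toy measure of suffering: the subjective departure from the
    individual's model of its optimal fitness state, taken here as the
    Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(state, dtype=float)
                                - np.asarray(optimal_state, dtype=float)))

def joy(state_before, state_after, optimal_state):
    """Toy measure of joy: how much a step moved the individual closer to
    its modelled optimal fitness state (clamped at zero, so moving away
    simply yields no joy rather than negative joy)."""
    return max(0.0, suffering(state_before, optimal_state)
                    - suffering(state_after, optimal_state))

def growth(fitness_before, fitness_after):
    """Toy measure of growth: the absolute increase in fitness."""
    return max(0.0, fitness_after - fitness_before)

# Example: an individual at [0, 0] takes a step toward its modelled
# optimum at [3, 4]; the step removes some suffering (i.e. produces joy)
# and, if the underlying fitness rises, counts as growth.
optimum = [3.0, 4.0]
before, after = [0.0, 0.0], [1.5, 2.0]
print(suffering(before, optimum))   # 5.0
print(joy(before, after, optimum))  # 2.5
print(growth(1.0, 1.25))            # 0.25 (hypothetical fitness values)
```

Read this way, the supergoal above amounts to keeping involuntary suffering as low as possible, channelling whatever suffering cannot be avoided into growth, and paying back voluntary suffering with at least as much joy.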

3 Comments »

  1. Jame5 » Self improvement versus non-eudaemonic dystopias said,

    November 6, 2007 @ 11:48 pm

    […] the context of my friendly AI theory I suggest a similar approach to Bostrom’s Singleton however honoring Ben Goertzel’s […]

  2. Jame5 » Estimating cognitive evolution’s complexity boundary in humans said,

    November 7, 2007 @ 7:21 pm

    […] As basis I will assume that: 1) cognitive evolution in humans is taking place on the level of beliefs (a brief summary can be found in my paper on friendly AI […]

  3. Jame5 » Understanding inter-group competition in humans said,

    November 8, 2007 @ 8:07 pm

    […] or perish. For a quick introduction to my thoughts on this issue I suggest reading my paper on friendly AI theory or Jame5 pages 69 and […]
