October 26, 2007 at 6:30 pm
· Filed under Uncategorized
The internet is a great place to find information. There are lots of pages to browse and millions of papers to read, but the best way to go straight to the heart of a subject is to go where those closest to it exchange ideas: forums and discussion lists.
Granted – the signal-to-noise ratio is not always where you want it to be, and you will have to sift through a lot of chaff before getting some really good wheat – but that is a small price to pay for being in the center of it all. Welcome to the trenches:
Don’t come back crying and say I did not warn you… No, really – it’s good stuff. Lots of brilliant people out there.
Permalink
October 25, 2007 at 10:01 pm
· Filed under Uncategorized
Jame5 is not just a science fiction novel – it is a science fiction novel with a cause. Ensuring the creation of a friendly AI is hard for many reasons:
- Creating an AGI is hard
- Goal retention is hard
- Recursive self improvement is hard
The question of what friendliness actually means, however, existed before all of those problems, is separate from them, and needs to be answered before the creation of a friendly AI can even be attempted. Coherent Extrapolated Volition, or CEV for short, is Eliezer S. Yudkowsky’s take on friendliness.
While CEV is great for describing what a friendly AGI will do, my critique of CEV is that it postpones answering the question of what friendliness specifically is until after we have an AGI that will answer that question for us.
Yes – a successfully implemented friendly AGI will do ‘good’ stuff and act in our ‘best interest’. But what is good, and what is our best interest? In Jame5 I provide a different solution to the friendliness issue; anyone who would like to get right to the meat should skip to the end of chapter 9.
In addition, I have summarized my core friendliness concepts in a paper called ‘Benevolence – a Materialist Philosophy of Goodness’ (2007/11/09 UPDATE: latest version here), in which I formulate the following friendly AGI supergoal:
Definitions:
- Suffering: negative subjective experience equivalent to the subjective departure from an individual’s model of optimal fitness state as encoded in its genome/memome
- Growth: absolute increase in an individual’s fitness
- Joy: positive subjective experience equivalent to the subjective contribution to moving closer towards an individual’s model of optimal fitness state as encoded in its genome/memome
Derived friendly AGI supergoal: “Minimize all involuntary human suffering, direct all unavoidable suffering towards growth, and reward all voluntary suffering contributing to an individual’s growth with an equal or greater amount of joy.”
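For readers who think in code, the three definitions above can be sketched as a toy model. This is only an illustration under my own simplifying assumptions – fitness reduced to a single number, and the `Agent` class, field names, and example values all hypothetical rather than anything from the book or the paper:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    fitness: float          # current fitness (toy scalar)
    optimal_fitness: float  # optimal fitness state as encoded in genome/memome

    def suffering(self) -> float:
        """Suffering: subjective departure from the model of the optimal fitness state."""
        return max(0.0, self.optimal_fitness - self.fitness)

    def grow(self, delta: float) -> float:
        """Growth: an absolute increase in fitness.

        Returns the joy produced: the contribution of the increase to moving
        closer towards the optimal fitness state (capped by the remaining gap).
        """
        joy = max(0.0, min(delta, self.suffering()))
        self.fitness += delta
        return joy

# Hypothetical example: an agent far from its optimum suffers; growth yields joy.
a = Agent(fitness=3.0, optimal_fitness=10.0)
print(a.suffering())  # 7.0 – the gap to the encoded optimum
print(a.grow(2.0))    # 2.0 – joy equal to the distance closed
print(a.suffering())  # 5.0 – remaining gap after growth
```

In this sketch the supergoal would read: shrink the `suffering()` gap where the agent did not choose it, and ensure every chosen increment of `grow()` returns at least as much joy as the suffering it cost.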
Permalink
October 22, 2007 at 7:42 pm
· Filed under Uncategorized
Get it while it is hot! It is an A4 PDF (111 pages, 1.4 MB). The PDF itself is free as in speech, but a meatspace version is in the making; it should be available in the next two weeks or so and will be announced here.
Some testimonials after the jump.

“Jame5 is an engaging story about our confrontation with the singularity. From virtual worlds to AI gods, it provides a poignant and sometimes chilling exploration of the future and what it means to be human. A fascinating read.” Dr. Stephen Omohundro – President of Self-Aware Systems and Advisor to the Singularity Institute for Artificial Intelligence
“Of the several hundred science fiction books I have read so far, Jame5 is one of the good ones. Its new ideas gave inspiration for thought, and I had fun reading it.” Michael Adling – micenterprise.de
“If you enjoy intelligent fiction with more than a mere pinch of philosophy, this book is for you. Applies the idea of ‘survival of the fittest’ to a singularity scenario and challenges your perception of what is real, what is right, what is possible – and what is not. An alluring glimpse into the future of mankind!” Monika Siegenthaler
“If you are what you remember then couldn’t your past have just been a dream? Read at your own risk – Jame5 may dislocate your mind!” Olive Huang Hai – youtube.com/Katzen2002
“Answering the essential question of what the meaning of life might be has occupied us since the existence of human consciousness. Jame5 explores this as well as other questions and opens new perspectives that keep the reader thinking long after having closed the book.” Sonja Costabel
Permalink
October 21, 2007 at 6:54 pm
· Filed under Uncategorized
Having been introduced to the concept of the Singularity about two years ago by reading the works of such brilliant authors as Ray Kurzweil and Charles Stross, I started to wonder how a superintelligent artificial intelligence would reason about good and evil. The result is the book “Jame5 – A Tale of Good and Evil”.
Jame5 is a “Sophie’s World” for futurists and singularitarians, in which I take you through a hard-takeoff technological singularity with all its philosophical consequences. What is good and what is evil? Where are we coming from and where are we going? What are happiness and the meaning of life? What do prophets have in common with dictators? All of these questions and more are touched upon in Jame5, and in the end they form my very personal description of the world and the future.
“Guido is an IT professional based in a Beijing that is in the midst of gearing up for the 2008 Olympic Games. His life takes a sudden turn, as his best friend Alecz reveals to him that he is at the center of an international effort to create a strong artificial general intelligence and nothing in his life is as he has always believed.”
The book can be downloaded directly from this blog as a PDF and will be published under the Creative Commons Attribution-Noncommercial-Share Alike license real soon now 😉
Permalink