Some background: the advent of greater-than-human artificial intelligence is hailed by insiders as the crucial event of the 21st century and is widely expected by experts to define humanity’s future. This event is generally dubbed the Singularity – with minor variations in what that concept actually means. I personally expect this meme to hit the broad mainstream sometime over the next 18 months.
And this is where the problem lies: the Singularity Institute for Artificial Intelligence (SIAI) is the only organization dedicated to “[…] confront this urgent challenge, both the opportunity and the risk”, and within the SIAI there is one person – Eliezer Yudkowsky – who dominates the discourse on friendly AI (FAI). What is so problematic about this state of affairs is threefold – two aspects of which I have previously covered here on this blog:
1. The concept of coherent extrapolated volition (CEV) does not hold up under scrutiny.
2. The paper clip argument does not hold up under scrutiny.
3. The current state of discourse on the topic is highly irrational.

#3 is the topic of this post.
In late 2007 Yudkowsky went on a writing spree over at the Overcoming Bias blog, in the course of which he has since written well over 600 articles. By March of 2009 this had gone so far that, in his own words:
“The Singularity Institute and the Future of Humanity Institute are beta’ing a new site devoted to refining the art of human rationality, LessWrong.com. LessWrong will end up as the future home of Eliezer Yudkowsky’s massive repository of essays previously written on Overcoming Bias” (emphasis mine)
In other words: “I figured out something that is hard to figure out. Figuring out or understanding the right answers requires rationality. Therefore, let’s set up a mass movement to train people to be black-belt rationalists so that they can reach these conclusions too.”
Needless to say, such grandeur was met with anticipatory skepticism, best summed up by one commenter: “A proper meme would spread without ego identity or association.” Hear, hear. But let us not condemn before seeing the evidence.
“If I had to pick a single statement that relies on more Overcoming Bias content I’ve written than any other, that statement would be: Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.”
This blog post – lauded by SIAI Media Director Michael Anissimov – and an ensuing mild bout with British philosopher David Pearce caused me to take notice and write my two rebuttals to CEV and the paper clip argument mentioned above. However, the problem goes much deeper. Instead of being presented with an argument – in 200 words or less, as they say – in support of the above claim, I was advised by the blog owner to immerse myself in the following tsunami of writings in order to advance my understanding:
Please realize that these 5 links alone constitute a good 10’000 words – not so much an argument as pointers to over 100 other articles for me to study. Needless to say, I was not inclined to read even a single word of this, yet at the same time I wondered why I was denied a simple, consistent, and concise argument. Instead of leaving it at that, I decided to analyze the statement from another perspective.
Consider the following core question with regard to the above statement: are human morals and (meta)morals universal/rational?
Assumption A: Human (meta)morals are not universal/rational.
Assumption B: Human (meta)morals are universal/rational.
Under assumption A one would have no chance of implementing any moral framework in an AI, since it would be undecidable which one it should be. Mine or yours, Hitler’s or Gandhi’s, Joe the plumber’s or Joe Lieberman’s, Buddha’s or Xenu’s? Consequently, under assumption A one arbitrarily sets the standard for what ‘something of worth’ is by decree. An AI sharing said standard would thus create a future of worth, and one deviating from said standard would not – by virtue of circular definition alone.
Under assumption B one would not need to implement a moral framework at all, since the AI would be able to deduce the morals using reason alone and come to cherish them independently – for the sole reason that they are rational and universal.
UPDATE: It turns out that this line of reasoning is not dissimilar to the one Socrates used in formulating Meno’s Paradox: “[A] man cannot search either for what he knows or for what he does not know[.] He cannot search for what he knows–since he knows it, there is no need to search–nor for what he does not know, for he does not know what to look for.” (80e, Grube translation)
No matter how you look at the above statement regarding AIs inheriting our morals and metamorals, it is simply nonsense: under A it would be impossible/tautological, and under B it would be unnecessary/self-contradictory, since the morals would be self-evident to a transhuman AI.
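To make the structure of the dilemma fully explicit, here it is in schematic form (my own rendering, with U standing for “human (meta)morals are universal/rational”):

1. Either U or not-U – the two assumptions are exhaustive.
2. If not-U, no particular moral framework is rationally privileged, so implementing one is arbitrary and ‘a future of worth’ holds merely by decree (assumption A).
3. If U, a sufficiently rational AI can deduce and endorse the morals on its own, so implementing them is redundant (assumption B).
4. Therefore, a ‘detailed reliable inheritance from human morals and metamorals’ is either arbitrary or redundant.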
Moral relativists need to understand that they cannot have their cake and eat it too. If you claim that values are relative, yet at the same time argue for a particular set of values to be implemented in a super-rational AI, you have to concede that this set of values – just like any other set of values, according to your own relativism – is utterly whimsical; and that being the case, what reason (you being the great rationalist, remember?) do you have for wanting it implemented in the first place? If, on the other hand, you believe you have a very good reason to favor a particular set of values over any other, then on what grounds would you be justified in believing that a transhuman AI – bound by reason and logic – would not have to agree with you on them?
Open your eyes, people: it is not that the suit of clothes is invisible only to those unfit for their positions – no – the emperor has no clothes!
And thus a few closing remarks:
To Yudkowsky: less evocative prose and fewer nested, self-referential links
To the Bayesian rationalists: make sure you are truly being rational, not rationalizing
To everyone: linking to 100 articles is not an acceptable substitute for a good argument
To the SIAI: update your FAI material so that it can be presented to interested parties in a concise document (i.e. 5’000 words or less, plus a 250-word executive summary) without sending people on a wild goose chase around lesswrong.com