Over the years I have not found following the
Below are some points from his piece, with corresponding parallels to my own writings on the matter:
1) “Similarly, the intuition that fairness has something to do with dividing up the pie equally, plays a role akin to secretly already having “0, 1, 2, …” in mind as the subject of mathematical conversation. You need axioms, not as assumptions that aren’t justified, but as pointers to what the heck the conversation is supposed to be about.” -> Here Y has a realisation: axioms are an important component in properly discussing matters of fairness and, ultimately, morality. I myself had this insight in November 2007 in my post on
2) “if we confess that ‘right’ lives in a world of physics and logic – because everything lives in a world of physics and logic – then we have to translate ‘right’ into those terms somehow.” -> Without realising it, Y has in this sentence solved – or, more accurately, re-solved – the Friendly AI problem. As I wrote in November 2009 as an addendum to Less is More – or: the sorry state of AI friendliness discourse:
Consider the following core question with regard to the above statement: are human morals and (meta)morals universal/rational?
- Assumption A: Human (meta)morals are not universal/rational.
- Assumption B: Human (meta)morals are universal/rational.
Under assumption A one would have no chance of implementing any moral framework into an AI, since it would be undecidable which morals to implement: mine or yours, Hitler’s or Gandhi’s, Joe the plumber’s or Joe Lieberman’s, Buddha’s or Xenu’s? Consequently, under assumption A one arbitrarily sets the standard for what ‘something of worth’ is by decree. Thus an AI holding said standard would create a future of worth, and one that deviated from said standard would not, by virtue of circular definition alone.
Under assumption B one would not need to implement a moral framework at all, since the AI would be able to deduce morals using reason alone and come to cherish them independently, for the sole reason that they are based on rational understanding and universality.
“…what’s “right” is a logical thingy rather than a physical thingy, that’s all. […] Where moral judgment is concerned, it’s logic all the way down.” -> Rational Morality – need I say more?
“And so whatever logical ordering it is you’re worried about, it probably does produce ‘life > paperclips’.” -> Here Y has the same core insight I had back in late 2007 (see above), namely that life is a foundational value – or, more succinctly put, existence > non-existence.
Not so coincidentally, and soon after Y’s original article, Wei Dai took the opposite position to Y, stating that
So where is Yudkowsky at? Hard to say, really, and there is no way of knowing where he is going to take it, having made these fundamental and crucially important insights. In hindsight it may very well turn out to be just another abandoned branch in his so-far half-decade-long LessWrong excursion. But you never know. It may very well be that in the not-too-distant future Yudkowsky will have a great breakthrough and, after a relatively brief but ecstatic period of recommitment to his Jewish roots, begin a decade-long immersion into the