Over the years I have not found following the lesswrong crowd to be worth my time. Nonetheless, and with excusable delay, I did manage to pick up on a piece by Yudkowsky entitled By Which It May Be Judged, in which he ‘confesses’ that there is logic to morality. Not quite his words, but that is the gist of it. I am personally not a fan of Y and have made no secret of that fact in the past. Under normal circumstances I would not pay him any mind; however, this piece shows a new line of thought that I identify as potentially important progress on his part.

Below are some points from his piece and the corresponding parallels to my own writings on the matter:

1) “Similarly, the intuition that fairness has something to do with dividing up the pie equally, plays a role akin to secretly already having “0, 1, 2, …” in mind as the subject of mathematical conversation. You need axioms, not as assumptions that aren’t justified, but as pointers to what the heck the conversation is supposed to be about.” -> Here Y has a realisation: axioms are an important component in properly discussing matters of fairness and ultimately morality. I myself had this insight in November 2007 in my post on Jame5.com, To Be Or Not To Be, That Is The Question. What Y does not realise in this context is that simply calling an unproved assumption an axiom and proceeding from there does not make a system of logic built on said axiom any less tautological. UPDATE: To be fair, he realised as much two weeks later.

2) “if we confess that ‘right’ lives in a world of physics and logic – because everything lives in a world of physics and logic – then we have to translate ‘right’ into those terms somehow.” -> Without realising it, Y has in this sentence solved – or more accurately re-solved – the Friendly AI problem. As I wrote in November 2009 as an addendum to Less is More – or: the sorry state of AI friendliness discourse:

Consider the following core question in regard to the above statement: are human morals and (meta)morals universal/rational?

  • Assumption A: Human (meta)morals are not universal/rational.
  • Assumption B: Human (meta)morals are universal/rational.

Under assumption A one would have no chance of implementing any moral framework into an AI, since it would be undecidable which one to implement. Mine or yours, Hitler’s or Gandhi’s, Joe the plumber’s or Joe Lieberman’s, Buddha’s or Xenu’s? Consequently, under assumption A one arbitrarily sets the standard for what ‘something of worth’ is by decree. Thus an AI holding said standard would create a future of worth, and one deviating from said standard would not, by virtue of circular definition alone.

Under assumption B one would not need to implement a moral framework at all, since the AI would be able to deduce it using reason alone and come to cherish it independently, for the sole reason that it is based on rational understanding and universality.

3) “…what’s “right” is a logical thingy rather than a physical thingy, that’s all. […] Where moral judgment is concerned, it’s logic all the way down.” -> Rational Morality – need I say more?

4) “And so whatever logical ordering it is you’re worried about, it probably does produce ‘life > paperclips’.” -> Here Y has the same core insight I had back in late 2007 (see above), namely that life is a foundational value, or more succinctly put: existence > non-existence.

Not so coincidentally, and soon after Y’s original article, Wei Dai took the opposite position to Y in stating that Morality Is Not Logical. Revisiting the above assumptions A and B, we now have a champion for each on lesswrong: in the one corner Y, espousing B, and in the other Wei Dai, proponent of A. How long until either of them realises that the proper ancient Greek dialogue to examine in this context is not Plato’s Euthyphro but the Meno, with Socrates confronting Meno’s Paradox?

So where is Yudkowsky at? Hard to say, really, and there is no way of knowing where he is going to take these fundamental and crucially important insights. In hindsight this may very well turn out to be just another abandoned branch in his so far half-decade-long lesswrong excursion. But you never know. It may very well be that in the not too distant future Yudkowsky will have a great breakthrough and, after a relatively brief but ecstatic period of recommitment to his Jewish roots, begin a decade-long immersion into the mysteries of the Kabbalah. If you think I am being facetious, think again.
