Roughly a decade ago I got hold of Ray Kurzweil’s landmark book The Singularity Is Near. Ray’s writings inspired me to develop my own thoughts on the matter of friendly AI, which I put to paper in my 2007 book Jame5 – A Tale Of Good And Evil, where I developed my early take on the subject based on evolutionary philosophy. Realizing that my ideas would only ever have a chance of gaining widespread traction if based on a solid foundation rooted in contemporary academic thought, my new wife and I went to Melbourne, Australia on a student visa in early 2010. After being accepted at the University of Melbourne, I spent the next two semesters there taking a Graduate Diploma in Anthropology and Social Theory with a special focus on the anthropology of religion. The entire time I stayed focused on integrating what I learned with my own ideas, which helped me identify a suitable research avenue towards my eventual PhD, which I intend to pursue at some point in the hopefully not too distant future.

In the meantime I have continued to develop and integrate my ideas on friendly AI. The recent release of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies has prompted a number of high-profile individuals, chief among them Elon Musk, Stephen Hawking and most recently Steve Wozniak, to chime in on the friendly AI debate:

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” – Elon Musk (source)

“Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate,” Professor Hawking said. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” – Stephen Hawking (source)

“Computers are going to take over from humans, no question,” Wozniak told the Australian Financial Review in an interview about the Apple Watch and self-driving cars. “Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people,” he said. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.” – Steve Wozniak (source)

Musk later put his money where his mouth is and donated $10,000,000 to the Future of Life Institute to ‘keep the demon in the box’, so to speak.

In addition to raising more than a few eyebrows in the community, this prodded me into putting some serious time into brushing up my thoughts on the matter, which are, needless to say, diametrically opposed to the opinions of not only Musk and Hawking but of course Yudkowsky as well. Having done so, my paper has since been accepted by Ben Goertzel for a 20-minute presentation at the 8th Conference on Artificial General Intelligence in Berlin, July 22–25, 2015:

Abstract. The matter of friendly AI theory has so far almost exclusively been examined from a perspective of careful design, while emergent phenomena in superintelligent machines have been interpreted as either harmful or outright dystopian. The argument developed in this paper highlights that the concept of ‘friendly AI’ is either a tautology or an oxymoron, depending on whether one assumes a morally real universe or not. Assuming the former, more intelligent agents would by definition be more ethical, since they could ever more deeply uncover ethical truths through reason and act in accordance with them; assuming the latter, reasoning about matters of right and wrong would be impossible, since the very foundation of morality and therefore friendliness would be illogical. Based on evolutionary philosophy, this paper develops an in-depth argument that supports the moral realist perspective and demonstrates not only its application to friendly AI theory – making AGI inherently safe irrespective of an AI’s original utility function – but also its suitability as a foundation for a transhuman philosophy. (full paper)

I am very much looking forward to the conference and hoping that my paper will spark some badly needed debate on the subject.
