It is a long-standing trend in futurist circles to paint the future as bleakly and as dangerously as possible, with only a handful of elite ‘rationalists’ able to even understand, let alone adequately address, the problem. In this tradition there exist a number of more or less well-known, more or less scary, and more or less publicised concepts that share a set of characteristics. They rest on premises that, when thought through to their final conclusions, appear to lead to bizarre or horrific conceptions of reality, yet when examined with a cool head dissolve into what they really are: scaremongering hokum. In this article I will shed some light on the erroneous assumptions and lapses in logic behind several of the more prominent futurist boogeymen.

The Doomsday Argument – More like the Transcension Argument

The Doomsday Argument goes something like this:

“The Doomsday argument (DA) is a probabilistic argument that claims to predict the number of future members of the human species given only an estimate of the total number of humans born so far. Simply put, it says that supposing the humans alive today are in a random place in the whole human history timeline, chances are we are about halfway through it.”

Doing the math based on these assumptions – about 60 billion humans born so far over the course of all of human history, an average lifespan of 80 years, and a world population stabilizing at 10 billion individuals – yields human extinction within 9,120 years at 95% confidence. Applying Nick Bostrom’s self-sampling assumption to the argument halves this time horizon again to 4,560 years. So far so grim.
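To make the arithmetic explicit, here is a minimal sketch of the calculation, using only the assumptions stated above (60 billion births so far, 80-year lifespans, a stable population of 10 billion):

```python
# Doomsday Argument arithmetic, using the assumptions stated above.
BIRTHS_SO_FAR = 60e9   # humans born to date (assumption)
POPULATION = 10e9      # stabilized world population (assumption)
LIFESPAN = 80          # average lifespan in years (assumption)

# With 95% confidence we are not among the first 5% of all humans,
# so the total number of births is at most BIRTHS_SO_FAR / 0.05.
max_total_births = BIRTHS_SO_FAR / 0.05
future_births = max_total_births - BIRTHS_SO_FAR

# A stable population of 10 billion with 80-year lifespans implies
# 125 million births per year.
births_per_year = POPULATION / LIFESPAN

years_left = future_births / births_per_year
print(years_left)      # 9120.0
print(years_left / 2)  # 4560.0, with Bostrom's self-sampling halving
```

Note that the 9,120-year figure is a 95% upper bound on the remaining run of ‘humans’ as a reference class, not a prediction of the likely date.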

There are a number of rebuttals to the DA, but the most optimistic and positive one seems so far not to have been covered, and it requires critical scrutiny of the reference class – which in the standard DA is ‘humans’. As futurists we constantly talk about posthumans, transhumans, humanity+ and so on, while often forgetting that humans are essentially postapes, transapes, or apes+. This evolutionary perspective lets us understand the human condition as a transitory state within a long chain of previous states of existence, reaching back over the course of evolution all the way to the beginning of life itself. From this perspective it is more reasonable to define the reference class as the entire span in which the ancestors of future posthumans have existed – the time from the beginning of life on earth until today, giving us roughly 3.6 billion years.

Applying this number to the DA yields a 95% chance that we will continue on the evolutionary trajectory for at least another 190 million years, and a 50% chance that we will do so for another 3.6 billion years. The 95% bound on the eventual extinction of our progeny’s progeny, on the other hand, lies roughly 68 billion years in the future – a total lifespan of 72 billion years – under these assumptions. A long time horizon indeed. But not only that: from this vantage point the Doomsday Argument becomes the Transcension Argument (TA), from which we can deduce with 95% probability that we will have realized our posthuman ambition within roughly the next 9,120 years – or within 4,560 years given Bostrom’s self-sampling assumption.
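The same one-sided bounds can be checked in a few lines; this is a sketch assuming only the 3.6-billion-year reference-class duration from above:

```python
# Transcension Argument: apply the DA bounds to the evolutionary
# reference class instead of to individual human births.
PAST_YEARS = 3.6e9  # time since life began on earth (assumption)

# With 95% confidence we are not in the last 5% of the total span,
# so at least 5/95 of the elapsed time still lies ahead.
future_min = PAST_YEARS * 0.05 / 0.95       # ~1.9e8 years

# With 50% confidence at least as much time lies ahead as behind.
future_median = PAST_YEARS                  # 3.6e9 years

# With 95% confidence we are not in the first 5%, so the total span
# is at most PAST_YEARS / 0.05 = 72 billion years.
future_max = PAST_YEARS / 0.05 - PAST_YEARS  # ~6.84e10 years
```

The asymmetry of the bounds – a floor of hundreds of millions of years but a ceiling of tens of billions – is what makes the evolutionary reference class so much less grim than the ‘humans’ one.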

The Simulation Hypothesis – No, They Won’t Just Switch Us Off

The Simulation Hypothesis (SH) is another futuristic boogeyman. This is how it is formulated:

“A technologically mature “posthuman” civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true:

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”

The nature of the SH becomes scary when one begins to imagine that the simulation may be turned off at the end of the experiment, or that none of our experiences are ‘real’. I will address the three points above one by one, and as it turns out the insights gained from the DA earlier have significant bearing on the scariness of the SH.

Re 1) Well – maybe. But given the details of my TA above this is far from certain.
Re 2) Nothing to worry about here.
Re 3) So what? Let me explain in a bit more detail below.

First of all, assuming that one cannot tell the difference between the simulation and ‘real’ reality, the only rational choice is to stop worrying and simply carry on. But then there is still the risk of being switched off at some point. Assuming that those running the ancestor simulation were only interested in the ‘human’-level part of their ancestral history, then based on the DA above there would be a 50% chance of the simulation running for another 480 subjective years before we transcend or go extinct. But even then – who is to say that the simulation is going to be switched off at all? That would imply that our descendants running these simulations had no consideration whatsoever for the plight of hundreds of billions of iterations of human-level consciousness, or that they lack the resources to sustain those simulations for a very long time – both of which I find utterly implausible.
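The 480-year figure follows directly from the DA assumptions introduced earlier (60 billion births so far, 125 million births per year):

```python
# 50% case of the DA: as many births still ahead as behind.
BIRTHS_SO_FAR = 60e9          # assumption from the DA section
BIRTHS_PER_YEAR = 10e9 / 80   # 10 billion people, 80-year lives

median_future_births = BIRTHS_SO_FAR  # 50% bound: future = past
subjective_years = median_future_births / BIRTHS_PER_YEAR
print(subjective_years)  # 480.0
```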

We have every reason to believe that posthuman intelligences will be far more compassionate, enlightened and caring beings than we are today. Instead of simply switching off their ancestral simulations, they would likely plan for a post-simulation virtual ‘heaven’ for all conscious beings in that simulation, or allow the simulation to run its course until it merges with their main branch of consciousness. The computational resources to allow for this would by that time be absolutely abundant, as Kurzweil explains in great detail in The Singularity Is Near while discussing the limits of nanocomputing:

“If we use the figure of 10^16 cps that I believe will be sufficient for functional emulation of human intelligence, the ultimate laptop [1kg mass in 1 liter volume] would function at the equivalent brain power of five trillion trillion human civilizations. Such a laptop could perform the equivalent of all human thought over the last ten thousand years (that is, ten billion human brains operating for ten thousand years) in one ten-thousands of a nanosecond.” Ray Kurzweil, The Singularity is Near, p. 134, ISBN 0-14-303788-9

Incidentally, 10 billion humans over 10,000 years is within the margin of error of our DA assumption of a 95% probability of extinction/transcendence for 10 billion humans over 9,120 years covered earlier. In other words, the computational resources needed for an ancestor simulation on the human scale would be too cheap to meter. The same would hold even if only 0.01% of the full capacity of Kurzweil’s ultimate laptop were realized, which would increase the time required for simulating 10,000 years of consciousness in 10 billion humans to a whopping nanosecond. At the same time, the ethical implications of simply switching an ancestral simulation off are so great that there would be no reason at all not to continue the simulation until an eventual merger with ‘real’ reality. After all, posthuman civilizations operating at scales sufficient to run ancestor simulations would have left meatspace long ago anyway.
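A rough sanity check of these figures, assuming Seth Lloyd’s ~5.4×10^50 operations-per-second physical limit for a 1 kg computer (the figure behind Kurzweil’s ‘ultimate laptop’) and the 10^16 cps-per-brain estimate from the quote above:

```python
# Back-of-the-envelope cost of a human-scale ancestor simulation.
BRAIN_CPS = 1e16              # calcs/sec per human brain (Kurzweil's figure)
ULTIMATE_LAPTOP_OPS = 5.4e50  # ops/sec, Lloyd's 1 kg physical limit
BRAINS = 10e9                 # 10 billion simulated humans
SIM_YEARS = 10_000
SECONDS_PER_YEAR = 3.156e7

total_ops = BRAIN_CPS * BRAINS * SIM_YEARS * SECONDS_PER_YEAR
wall_clock = total_ops / ULTIMATE_LAPTOP_OPS     # ~6e-14 s, i.e. on the
                                                 # order of 1e-4 nanoseconds
wall_clock_at_0_01_percent = wall_clock / 1e-4   # ~6e-10 s, under a nanosecond
```

The result lands on the same order of magnitude as Kurzweil’s “one ten-thousandth of a nanosecond”, which is the sense in which such a simulation would be too cheap to meter.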

The Great Filter – or why ET Does Not Conform to Your Imagination

The Great Filter is another of those worrisome perspectives on our place and future development in the universe:

“The Great Filter, in the context of the Fermi paradox, is whatever prevents “dead matter” from giving rise, in time, to “expanding lasting life”. The concept originates in Robin Hanson’s argument that the failure to find any extraterrestrial civilizations in the observable universe implies the possibility something is wrong with one or more of the arguments from various scientific disciplines that the appearance of advanced intelligent life is probable; this observation is conceptualized in terms of a “Great Filter” which acts to reduce the great number of sites where intelligent life might arise to the tiny number of intelligent species actually observed (currently just one: human).”

Or, put another way: the Fermi paradox implies that we will go extinct before reaching the stars, since apparently so has everyone else – after all, we cannot see them. What assumption does this conclusion ultimately rest on? The idea that we have looked the right way in the right place for other intelligent life. However, there is a rather elegant way of resolving the Fermi paradox that makes the Great Filter a redundant concept.

Meet STEM compression, advocated by futurist and scholar of accelerating change John Smart. STEM compression is:

“[…] the idea that the most (ostensibly) complex of the universe’s extant systems at any time (galaxies, stars, habitable planets, living systems, and now technological systems) use progressively less space, time, energy and matter (“STEM”) to create the next level of complexity in their evolutionary development. A similar perspective is found in Buckminster Fuller’s writings on ephemeralization. In what he calls the “developmental singularity hypothesis”, Smart proposes that STEM compression, as a driver of accelerating change, must lead cosmic intelligence to a future of highly-miniaturized, accelerated, and local “transcension” to extra-universal domains, rather than to space-faring expansion within our existing universe. The transcension scenario (vs. expansion scenario) proposes that once civilizations saturate their local region of space with their intelligence, they need to leave our visible, macroscopic universe in order to continue exponential growth of complexity and intelligence, and thus disappear from this universe, thus explaining the Fermi Paradox.”

Well, there you go. A perfectly reasonable alternative explanation without the need to hypothesize about scary Great Filters.

Gigadeath – Seriously, Just Stop It Already

In his 2005 book The Artilect War, Hugo de Garis outlines an argument for a bitter controversy in the near future between the Terrans – opposed to building ‘godlike massively intelligent machines’ – and the Cosmists, who are in favor. In de Garis’ view a war causing billions of deaths – hence ‘gigadeath’ – will become inevitable in the struggle following the unsuccessful resolution of the ‘shall we build AI gods’ controversy. De Garis argues that the casualties of war have increased exponentially over the course of human history and reaches his conclusion by extrapolating that trend into the future.

What de Garis fails to realize in his gigadeath prognosis is that while the absolute number of war casualties has indeed risen over the course of history, the number of human beings on the planet has risen even faster over the same period, resulting in an ever smaller share of casualties relative to total population. This trend is brilliantly quantified and discussed in Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined. Aside from that, I have always thought that the Cosmists would be so advanced by that time that the conflict would boil down to some angry fist-shaking and name-calling on the side of the Terrans anyway. Given the possibility of a hard takeoff, the entire discussion would be moot as well, since it would all be over before it really begins.

Unfriendly AI – A Contradiction in Terms

I have addressed this before, but allow me to reiterate here. There is an entire movement of researchers out there concerning themselves with the horrific idea of a transhuman AI that, instead of being ‘friendly’, turns out to be a real party pooper and converts the entire universe into paperclips – or places dust motes in an unimaginably large number of eyes, depending on who you ask. The sheer horror of the idea boggles the mind! Except it doesn’t.

Yes, sure – there is a real risk in creating dumb AI that blindly causes harm and destruction. A transhuman AI, however, is a completely different kettle of fish. The emphasis here lies on ‘transhumanly’ intelligent – in other words, smarter in every way than you or I or any human being who ever lived. The validity of the concept of an ‘unfriendly’ AI boils down to a very simple question:

Does the universe exhibit moral realism or not?

A gentle reminder:

“Moral realism is the meta-ethical view which claims that:

  1. Ethical sentences express propositions.
  2. Some such propositions are true.
  3. Those propositions are made true by objective features of the world, independent of subjective opinion.”

From here we can make two assumptions:

A: Yes – the universe exhibits moral realism
B: No – the universe does not exhibit moral realism

If ‘A’ is true then a transhuman AI would reason itself into the proper goal system and through the power of reason alone would become transhumanly ‘friendly’.

If ‘B’ is true then no one – not even a transhuman AI – could validly reason about ‘friendliness’ at all, making the notion of ‘unfriendly’ a logically vacuous concept.

In short, the idea of ‘unfriendly AI’ is either self-solving or logically invalid – so stop worrying about it.

The Basilisk – Meet the Xenu of the Singularitarians

This one is a real doozy and probably deserves an entire post of its own at some point, but let’s keep it basic for now. For some real gems, see this early 2013 Reddit thread with Yudkowsky. In essence, The Basilisk is a modified, futurist version of Pascal’s wager in which a transhuman AI could eventually aim to punish individuals who failed to do everything in their power to bring it about:

“The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment for those who knew the importance of the task. But it’s much more than just “serve the AI or you will go to hell” — the AI and the person punished have no causal interaction: the punishment would be of a simulation of the person, which the AI would construct by deduction from first principles. In LessWrong’s Timeless Decision Theory (TDT), this is taken to be equivalent to punishment of your own actual self, not just someone else very like you — and furthermore, you might be the simulation.”

I know, it is bizarre. But not only that: any and all public discussion of the matter among the high priesthood of singularitarians is completely and utterly banned, making the Basilisk truly the Xenu of the singularitarians.

Unfortunately, however, this matter should not be taken too lightly:

“Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they’re fairly sure intellectually that it’s a silly problem. The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can’t reconstruct a copy of them to torture.”

Firstly, considering the SH covered above, it is orders of magnitude more likely that one exists within a sophisticated ancestor simulation than that one is being simulated by a malevolent transhuman intelligence hell-bent on finding out whether one would have contributed adequately towards bringing it about. But leaving that aside entirely, The Basilisk is fortunately even more easily refuted than Pascal’s wager, due to the essential unknowability of the criteria one is being tested against. It is quite simple really: the whole point of such a simulation is to keep the desired behavior unknown to the individuals being tested. If it were clear from the outset what was expected of the candidates in order to avoid negative repercussions, their subsequent behavior would be utterly meaningless for the assessment. A transhuman AI simulating you would by definition know if you happened to stumble upon the actual test criteria, and would have to reset the simulation for a rerun after fixing the knowability of said criteria.

In addition, how do you know you are not being simulated by a transcended Japanese toilet seat wanting to determine whether you properly flushed its pre-sentient brethren? Or any other conceivable alternative scenario? Again: stop worrying and live your life as if this is the only real reality. Really!


Defusing these core futurist boogeymen is a matter of looking beyond basic assumptions and uncovering the broader context in which they are made. Recognizing our long evolutionary history transforms the Doomsday Argument into the Transcension Argument. The Simulation Hypothesis loses its teeth once we consider that the vast computational resources and ethical superiority of our eventual descendants make ‘flicking the off switch’ utterly implausible. The notion of inevitable gigadeath before the end of the century in no way, shape or form conforms to historical trends in violence. The problem of unfriendly AI is logically either self-solving or meaningless. And the dreaded Basilisk is but an unfortunate blundering into a set of overly complex, so-called rational notions of what the future might look like while disregarding basic principles of logic. Once again it is the sleep of reason that produces monsters.
