The classic form of the Doomsday Argument says it’s more likely that we’re closer to the end of our civilization than the beginning. In other words, apocalyptic destruction awaits us in the not-too-distant future. But a recent re-interpretation of this argument has slightly improved our estimated prospects for survival.
The Doomsday Argument (DA) has been around for 30 years. It was first proposed by the astrophysicist and philosopher Brandon Carter in an unpublished paper. Though many subsequent papers have tried to defeat it, it has — quite infuriatingly — stood the test of time; if there were ever an argument we’d like to disprove, this would be the one.
Since Carter’s first formulation of the argument, several other philosophers have taken it further. Back in 1996, philosopher John Leslie published his book, The End of the World: The Science and Ethics of Human Extinction, in which he presented it in more detail. It’s for this reason that the idea is often called the Carter-Leslie Doomsday Argument. Interestingly, the DA has been independently discovered by others, including J. Richard Gott and Holger Bech Nielsen.
But regardless of the thinker, each one came to the same disturbing conclusion: Doom is soon.
Birth Order Matters
It’s rare for philosophers to make predictions, and rarer still to make predictions based on actual data. Yet the DA attempts to predict our prospects for survival, and it does so based on probabilistic reasoning, a healthy application of Bayes’ rule, the Copernican Principle (i.e. we don’t occupy a special place in the universe), and the self-sampling assumption (i.e. you should reason as if you were randomly selected from a group of individuals).
As a philosophical exercise, the DA cannot predict how human civilization might come to an end — say by nuclear war or an asteroid impact — but it can predict the likelihood of such an event given our current place in the roll-call of all potential humans.
The DA asks us to look at our birth order. No, not our place in our own immediate families, but in the family of all humans who have ever been born — and who are still to come. According to the Population Reference Bureau, more than 107 billion people have lived on Earth since the advent of our species. The task is to weigh our precise place in that roll-call against all the humans still to be born.
Indeed, what the DA asks us to do is evaluate — or rather predict — our number in the roll-call relative to the whole. And this is where things get interesting — and disturbing.
To better explain this, I’m going to use a much smaller sample size.
Let’s say I’ve put you into one of two groups: one with 10 members and one with 100. You have no idea which of the two groups you belong to, but I’ve assigned a number to each member of each group. Members of the small group are numbered 1 to 10, and members of the large group 1 to 100. Now I reveal your number, and it’s 72. Clearly, you belong to the larger group. But suppose instead your number is seven. What are you to believe now? A simple assessment of probability says it’s considerably more likely that you’re in the small group. It’s not a certainty, just much more probable.
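The arithmetic behind this intuition is a single application of Bayes’ rule. Here is a minimal sketch in Python (the function name, the equal prior, and the default group sizes are my own choices for illustration):

```python
def posterior_small(n, size_small=10, size_large=100, prior_small=0.5):
    """Posterior probability that you are in the small group,
    given that your revealed number is n (equal priors assumed)."""
    # Likelihood of holding number n in each group (zero if n is too big)
    like_small = 1 / size_small if n <= size_small else 0.0
    like_large = 1 / size_large if n <= size_large else 0.0
    numerator = prior_small * like_small
    denominator = numerator + (1 - prior_small) * like_large
    return numerator / denominator

print(posterior_small(72))  # 0.0 -- number 72 rules the small group out
print(posterior_small(7))   # ~0.909 -- odds of 10 to 1 favor the small group
```

With equal priors, the likelihood ratio of 1/10 to 1/100 translates directly into posterior odds of ten to one for the small group.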
We can use similar logic to explain the DA. Place all the humans who will ever live into two analogous groupings: one in which humanity is destroyed soon (Doom Early), and one in which it is destroyed a long, long time from now (Doom Late). The population of the Doom Late group would exceed that of the Doom Early group by orders of magnitude. Thus, given our relatively early place in the roll-call, it’s more likely that we belong to the smaller group than the larger one.
Using this line of reasoning and Bayes’ formula, John Leslie concluded that we can be 95% certain that we are among the last 95% of all the humans ever to be born. Specifically, by using the figure of 70 billion humans born so far, he estimated that there is a 95% chance no more than 1.4 trillion humans will ever live. By looking at the rate of population growth, Leslie figured that we’d reach this point in about 10,000 years.
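Leslie’s figure follows from simple division: if we are among the last 95% of humans, then at most 5% of all humans came before us. A quick sketch of the arithmetic (the variable names are mine):

```python
# Leslie's estimate, reproduced as back-of-the-envelope arithmetic.
born_so_far = 70e9          # humans born to date (Leslie's figure)
confidence = 0.95           # we are 95% sure to be in the last 95%

# If we are among the last 95%, at most 5% of all humans precede us,
# so the total can be no more than born_so_far / 0.05.
max_total = born_so_far / (1 - confidence)
print(f"{max_total:.2e}")   # 1.40e+12 -- i.e. 1.4 trillion humans
```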
Other philosophers have taken a more severe approach to the DA, effectively arguing that humanity has a near-zero chance of being a Doom Late civilization.
Needless to say, the DA attracts a lot of heat. According to Oxford professor Nick Bostrom, there are as many papers published in support of the argument as there are in opposition to it.
“Yet despite being subjected to intense scrutiny by a growing number of philosophers,” he says, “no simple flaw in the argument has been identified.”
One of the more potent criticisms came in 1998 from K.B. Korb and J.J. Oliver, who essentially argued that the DA is a gross oversimplification and that it fails a minimal standard of reasonableness. They also argued that a sample size of one (i.e. oneself) is too small to make a serious difference to one’s rational beliefs, and that the same reasoning could just as easily be applied to one’s own life span. (Bostrom offers rebuttals to each of these objections in his paper, “The Doomsday Argument is Alive and Kicking.”)
There are many other objections, including the idea that being born within the first 5% of all humans is no mere coincidence; Ken Olum’s self-indication assumption (the idea that, given the possibility of not existing at all, your very existence should give you reason to think there are many observers); and the notion that the sample group, namely all possible humans, is far too limited (e.g. the DA doesn’t take posthumans or other human-spawned intelligences, like uploads, into account).
And indeed, it’s these last two points that the new paper by Austin Gerig, Ken Olum, and Alexander Vilenkin is predicated upon. Regrettably, their new interpretation still suggests that our odds of survival are low, but an adjustment in the way we think about the DA should give us cause for optimism.
Intriguingly, their argument is a kind of philosophical mash-up of the DA and the Great Filter hypothesis, the idea that advanced space-faring extraterrestrial civilizations are rare, or even non-existent. They argue that many civs exist in the universe, but that they can be broken down into two categories: short-lived civs that die out before developing the capacity to colonize space, and long-lived interstellar civs whose populations explode as they spread.
They admit that this model is not realistic in detail, but that “it may well capture the bimodal character of the realistic size distribution.” Space-faring civs could be huge in terms of population.
According to the authors, if long-lived interstellar civs were common — each with something like a million times more people than a short-lived one — it would be more likely than not that we should find ourselves in one of them. But because we don’t find ourselves in such a civ, we should probabilistically conclude that (1) advanced space-faring civs are rare and (2) we are more likely to be in a short-lived civilization. Consequently, as the authors point out, this means that we’re probably doomed in the near term.
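The force of this observation can be seen with a toy calculation. Suppose a fraction f of civilizations become long-lived, each with a million times the population of a short-lived one; the numbers below are purely illustrative and are not taken from the paper:

```python
def p_short(f, ratio=1e6):
    """Chance that a randomly chosen individual lives in a
    short-lived civilization, if a fraction f of civs are
    long-lived with `ratio` times the population."""
    short_pop = (1 - f) * 1.0   # people in short-lived civs (1 unit per civ)
    long_pop = f * ratio        # people in long-lived civs
    return short_pop / (short_pop + long_pop)

print(p_short(0.5))   # ~1e-6: if half of all civs go interstellar,
                      # almost nobody lives in a short-lived one
print(p_short(1e-6))  # ~0.5: only if long-lived civs are vanishingly
                      # rare is a short-lived observer unsurprising
```

Since we observe ourselves in a (so far) non-interstellar civilization, the calculation pushes toward the authors’ two conclusions: long-lived civs are rare, or we are unlikely to become one.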
But, their new argument contains a kind of caveat that starts with this interesting point:
The specific issue [that concerns] us here is the possibility that our universe might contain many civilizations. In that case, we should consider ourselves to be randomly chosen from all individuals in that universe or multiverse.
In other words, we shouldn’t consider our random place in the roll-call of all possible humans, but in the roll-call of all possible individuals living across the entire universe. Consequently, our chance of being in any given long-lived civilization is higher than our chance of being in any given short-lived civilization. But this only works if there are lots and lots of civilizations — something we’re not certain of (and which, as noted above, may be unlikely).
So, if there are many civilizations, the Doomsday Argument is defeated.
Consequently, the strength of the DA’s predictive abilities lies in the fraction of civs that can survive existential risks and go interstellar — a figure that could sit anywhere from zero to 100%. Unlike other DA thinkers, who essentially place this figure at or close to zero, Gerig and colleagues have upped it to between 1 and 10%.
The authors conclude:
If there is a message here for our own civilization, it is that it would be wise to devote considerable resources (i) for developing methods of diverting known existential threats and (ii) for space exploration and colonization. Civilizations that adopt this policy are more likely to be among the lucky few that beat the odds. Somewhat encouragingly, our results indicate that the odds are not as overwhelmingly low as suggested by earlier work.
The message of the Doomsday Argument, therefore, is that we need to become that fortunate 1 to 10%.