Are we underestimating the risk of human extinction?


Is humanity at risk of going extinct? Oxford philosopher Nick Bostrom (who is also famous for the claim that we are all living in a computer simulation) thinks that it is; but even more troubling, he says, is how much we tend to underestimate that risk.

The Atlantic's Ross Andersen recently sat down with Bostrom to discuss the dangers threatening our existence in greater detail. We've included a few of Andersen's introductory discussion points here, but you'll want to click through to read the interview in its entirety.

Some have argued that we ought to be directing our resources toward humanity's existing problems, rather than future existential risks, because many of the latter are highly improbable. You have responded by suggesting that existential risk mitigation may in fact be a dominant moral priority over the alleviation of present suffering. Can you explain why?

Bostrom: Well, suppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially; somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do. There are so many people who could come into existence in the future if humanity survives this critical period of time; we might live for billions of years, our descendants might colonize billions of solar systems, and there could be billions and billions of times more people than exist currently. Therefore, even a very small reduction in the probability of realizing this enormous good will tend to outweigh even immense benefits like eliminating poverty or curing malaria, which would be tremendous under ordinary standards.
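The arithmetic behind this argument can be made concrete with a back-of-envelope sketch. All of the numbers below are illustrative assumptions, not figures from Bostrom; the point is only that when the potential future population is astronomically large, even a tiny probability shift dominates a huge present-day benefit in expected lives affected.

```python
# Back-of-envelope expected-value comparison.
# All values are made-up assumptions for illustration only.
future_people = 1e16      # assumed potential future population if humanity survives
risk_reduction = 1e-6     # assumed tiny reduction in extinction probability
present_benefit = 1e9     # assumed lives dramatically improved by a present-day intervention

# Expected future lives preserved by the small risk reduction
expected_future_lives = future_people * risk_reduction

print(expected_future_lives)                    # 1e10
print(expected_future_lives > present_benefit)  # True
```

Even with a one-in-a-million probability shift, the assumed 10^16 potential future people make the expected value (10^10 lives) ten times the assumed present-day benefit; larger estimates of humanity's potential only widen the gap.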

In the short term you don't seem especially worried about existential risks that originate in nature, like asteroid strikes, supervolcanoes, and so forth. Instead you have argued that the majority of future existential risks to humanity are anthropogenic, meaning that they arise from human activity. Nuclear war springs to mind as an obvious example of this kind of risk, but that's been with us for some time now. What are some of the more futuristic or counterintuitive ways that we might bring about our own extinction?

Bostrom: I think the biggest existential risks relate to certain future technological capabilities that we might develop, perhaps later this century. For example, machine intelligence or advanced molecular nanotechnology could lead to the development of certain kinds of weapons systems. You could also have risks associated with certain advancements in synthetic biology.

Of course there are also existential risks that are not extinction risks. The concept of an existential risk certainly includes extinction, but it also includes risks that could permanently destroy our potential for desirable human development. One could imagine certain scenarios where there might be a permanent global totalitarian dystopia. Once again that's related to the possibility of the development of technologies that could make it a lot easier for oppressive regimes to weed out dissidents or to perform surveillance on their populations, so that you could have a permanently stable tyranny, rather than the ones we have seen throughout history, which have eventually been overthrown.

And why shouldn't we be as worried about natural existential risks in the short term?

Bostrom: One way of making that argument is to say that we've survived for over 100,000 years, so it seems prima facie unlikely that any natural existential risk would do us in here in the short term, in the next hundred years for instance. By contrast, we are going to introduce entirely new risk factors in this century through our technological innovations, and we don't have any track record of surviving those.

Now another way of arriving at this is to look at these particular risks from nature and to notice that the probability of their occurring is small. For instance, we can estimate asteroid risk by looking at the distribution of craters that we find on Earth or on the moon, which gives us an idea of how frequent impacts of certain magnitudes are, and the data seem to indicate that the risk there is quite small. We can also study asteroids through telescopes to see whether any are on a collision course with Earth; so far we haven't found any large asteroids headed our way, and we have already looked at the majority of the big ones.


Read the rest over on The Atlantic
Top image by Vladimir Manyuhin via Professional Photography Blog



Corpore Metal

I think it's false to separate the boring, depressing, but necessary and realistic stuff, like:

* efforts to end the global poverty that malaria and cholera thrive in

* ending the poverty that represses women's political and economic power

* ending the poverty that permits endless petty wars to persist

* ending the poverty that allows homelessness to continue.

from the exciting, futuristic, science fiction-y stuff, like:

* Efforts to keep ourselves from being wiped out by as-yet nonexistent nanoweapons

* Efforts to keep sapient artificial life from wiping us all out

* Building space elevators and space colonies.

Serendipity happens all the time in history. Who can really say how one effort might unexpectedly lead to progress in another? Perhaps some kid in Africa who doesn't die of dysentery, thanks to money spent on decent plumbing in her city, then goes on to figure out how to build the blue goo that stops the red goo from killing us all. Perhaps brainstorming methods to control red goo gives us better ways to control bioweapons we already have?

It works both ways, I think. Why can't we do both and denigrate neither?