Given the vastness of space, it may only be a matter of time before we make contact with intelligent extraterrestrials. But how might an alien civilization react to such a monumental meet-and-greet, and can we possibly know their intentions? Here’s what we might expect.

Alien civilizations will most assuredly be like snowflakes: no two will be the same. Each will differ according to an array of factors, including their mode of existence, age, history, developmental stage, and level of technological development. That said, advanced civilizations may have a lot in common as they adapt to similar challenges; we all share the same Universe, after all.

We’ve obviously never interacted with an alien civilization, so we have virtually no data to go by. Predicting alien intentions is thus a very precarious prospect — but we do have ourselves to consider as a potential model, both in terms of our current situation and where we might be headed as a species in the future. With this in mind, I analyzed three different scenarios in my effort to predict how extraterrestrial intelligences (ETIs) might react to meeting us:

  1. Contact with a biological species much like our own
  2. Contact with a post-biological species more advanced than ours
  3. Contact with a superintelligent machine-based alien intelligence

Clearly, there may be other alien typologies out there, but there’s no sense trying to predict what they might be like, especially in terms of their intentions.

Cut from the Same Cosmological Cloth

There’s virtually no way that an alien species will appear and behave exactly like us, but that doesn’t mean we won’t share certain similarities; in reality, we may be more alike than not — especially if we’re both still at the biological stage of our development.

As evolutionary biologist Richard Dawkins pointed out in Climbing Mount Improbable, there’s no long-term planning involved in evolution, but species do move towards fitness peaks, i.e. they tend to get better at specialized tasks over time (a classic example is the spider web, which is considered an optimal “design” in nature). What’s more, some species separated by time and space have been known to evolve startlingly similar traits, a phenomenon biologists refer to as convergent evolution. It’s not unreasonable to surmise, therefore, that an alien species with human-like intelligence — and the physical attributes to exert that intelligence on its environment — will share certain things in common with humans, including technologies and inherited behaviors.

Image: Simulations show that virtual spiders exhibit similar web-building behavior to real spiders. Extrapolating to humans and aliens, it’s conceivable that our technologies and socio-political organization also converge around similar “fitness peaks.” (credit: Thiemo Krink & Fritz Vollrath)
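
To make the “fitness peak” idea concrete, here’s a minimal toy sketch of blind hill-climbing in Python (my own illustration, not a model from Dawkins or from the Krink & Vollrath simulations): two lineages that start far apart on the same landscape, keeping only mutations that improve fitness, end up at essentially the same “design.”

    import random

    def fitness(x):
        # A single-peaked fitness landscape: the optimal "design" sits at x = 3.0
        return -(x - 3.0) ** 2

    def evolve(start, generations=10000, step=0.05, seed=None):
        # Blind hill-climbing: keep a random mutation only if it improves fitness.
        rng = random.Random(seed)
        trait = start
        for _ in range(generations):
            mutant = trait + rng.uniform(-step, step)
            if fitness(mutant) > fitness(trait):
                trait = mutant
        return trait

    # Two lineages with wildly different starting points...
    lineage_a = evolve(start=-20.0, seed=1)
    lineage_b = evolve(start=40.0, seed=2)

    # ...arrive at essentially the same trait value (the fitness peak).
    print(round(lineage_a, 2), round(lineage_b, 2))  # both land near 3.0

Nothing in the process plans ahead, yet the outcomes converge; that, in miniature, is the intuition behind expecting broad similarities between independently evolved intelligences.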

In his new book, The Runes of Evolution, evolutionary paleobiologist Simon Conway Morris argues that, while the number of possibilities in evolution is astronomical, only a vanishingly small fraction of them actually work.

“Convergence is one of the best arguments for Darwinian adaptation, but its sheer ubiquity has not been appreciated,” he noted in a recent University of Cambridge article. “Often, research into convergence is accompanied by exclamations of surprise, describing it as uncanny, remarkable and astonishing. In fact it is everywhere, and that is a remarkable indication that evolution is far from a random process. And if the outcomes of evolution are at least broadly predictable, then what applies on Earth will apply across the Milky Way, and beyond.”

Conway Morris contends that biological aliens will likely resemble humans, including features like limbs, heads, bodies — and intelligence. And if our levels of intelligence are comparable, then our psychologies and emotional responses may be similar as well.

Inherited Behaviors

So which inherited behaviors might we share in common?

As a species descended from primates, we are highly social creatures with definite hierarchical tendencies. As Jared Diamond pointed out in Guns, Germs, and Steel: The Fates of Human Societies, we’re also risk-takers. Indeed, humans are distinct among primates in that we exhibit migratory proclivities; our ancestors frequently abandoned their “natural” environments in search of better ones, or when following migratory creatures like large game. This risk-taking behavior, along with our insatiable curiosity, language skills, and unparalleled conceptual abilities, has allowed us to innovate and organize over the millennia.

Dino World via Jeffrey Morris/FutureDude

But is it fair to project our primate-like attributes onto aliens? Yes and no. Biological aliens are not likely to be primates, but some might be very primate-like. As with terrestrial species, their evolution from a Darwinian to a post-Darwinian phase may follow similar patterns. And from a social constructionist perspective, humans and aliens may also share similarities in the socio-political realm.

That said, if an alien species evolved from different biological precursors, such as animals resembling fish, insects, dinosaurs, or birds, or something we don’t observe here on Earth, its behaviors will likely be markedly different, and thus very difficult, though not necessarily impossible, to predict. But it’s fair to say that an overly belligerent, anti-social species, no matter how intelligent or physically adept, is not likely to advance to a post-industrial, space-faring stage.

If aliens are biologically and socially like us, therefore, they may share many of our desires and proclivities, including our interest in science, and in meeting and interacting with extraterrestrial life. At the same time, however, they may also share our survival instinct and experience trepidation at meeting “the other,” leading to the prioritization of the in-group.

Should we make first contact with an extraterrestrial intelligence, we’ll have to make sure that we come across as friendly. Hopefully they’ll do the same. But even if we’re happy to meet each other, a major challenge will be in assessing the risks of cultural and technological exchange; just because we get along doesn’t mean that something unintentionally bad can’t happen. As a historical example, the introduction of Eurasian diseases to the Americas during the colonization era is a potent reminder of what can happen when disparate and formerly isolated civilizations meet.

As Stephen Hawking has said, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans. We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”

An ideal encounter (Star Trek: First Contact)

The late Carl Sagan had a different take, arguing that it’s inappropriate to make historical analogies when discussing alien intentions. A contact optimist, he said it was unlikely that we’d face “colonial barbarity” from advanced ETIs. According to Sagan, alien civilizations that lived long enough to colonize a good portion of the galaxy would be the least likely to engage in aggressive imperialism. He also thought that any “quarrelsome” extraterrestrials would be quashed by a more powerful species. What’s more, he didn’t think that technologically advanced ETIs would have anything to fear from us, so we needn’t fear them.

Close Encounters of the Machine Kind

The hunt for radio signals and laser messages may lead us to discover an alien species much like our own. But if an alien spaceship suddenly appeared at our door, it’s highly unlikely that something biological would come out. More likely, some sort of machine would be there to greet us.

Bots from space (The Iron Giant)

As former NASA Chief Historian Steven J. Dick has pointed out, the dominant form of life in the cosmos is probably post-biological. Advanced alien civilizations, either through their own trans-biological evolution or through the rise of their artificially intelligent progeny, are more likely to be machine-based than meat-based. We ourselves may be heading in this direction, as witnessed by current and pending advances in genetics, cybernetics, molecular nanotechnology, cognitive science, and information technology.

As Dick noted in his paper, “The Postbiological Universe”:

Because of the limits of biology and flesh-and-blood brains...cultural evolution will eventually result in methods for improving intelligence beyond those biological limits. If the strong Artificial Intelligence concept is correct, that is, if it is possible to construct AI with more intelligence than biologicals, postbiological intelligence may take the form of AI. It has been argued that humans themselves may become postbiological in this sense within a few generations.

This line of argumentation led Dick to posit the Intelligence Principle:

The maintenance, improvement and perpetuation of knowledge and intelligence is the central driving force of cultural evolution, and to the extent intelligence can be improved, it will be improved.

Sounds like a cool mission statement — one common to all civilizations as they evolve and adapt to changing conditions over time. Given similar fitness landscapes — like trying to develop a stable and optimal Type II Kardashev Civilization or living in tandem with artificial superintelligence — ETIs may evolve towards a common mode of existence. However, extreme adaptationist pressures, including and especially the mitigation of existential risks, may constrain post-biological life in very narrow ways. Should this be the case, we may eventually be able to predict the nature of this modality. Such an exercise would serve the dual purpose of modeling our future selves and the potential characteristics and tendencies of extraterrestrial civilizations.
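
For a sense of the energy scale that “Type II” implies, here’s a quick back-of-envelope sketch using Carl Sagan’s widely quoted interpolation of the Kardashev scale; the formula and the round-number figures are standard values rather than anything from this article’s sources.

    import math

    def kardashev(power_watts):
        # Carl Sagan's interpolation of the Kardashev scale.
        return (math.log10(power_watts) - 6) / 10

    # Rough, commonly quoted figures (assumptions for illustration only):
    HUMANITY_TODAY = 2e13    # ~20 terawatts, current global power consumption
    SUNLIKE_STAR = 3.8e26    # total luminosity of a Sun-like star, in watts

    print(f"Humanity today: K ~ {kardashev(HUMANITY_TODAY):.2f}")      # roughly 0.7
    print(f"Whole star harnessed: K ~ {kardashev(SUNLIKE_STAR):.2f}")  # roughly 2.1

The gap between roughly 0.7 and 2 is about thirteen orders of magnitude in harnessed power, which is part of why the “fitness landscape” at that scale may look much the same for any civilization that reaches it.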

Needless to say, post-biological aliens, like cyborgs or civilizations composed of uploaded minds, would have a different set of priorities from those we’re accustomed to. These ETIs may be content to build their Dyson Spheres and live virtual lives fueled by massive Matrioshka Brains. If this is the case, they may have no desire to make contact with biological beings like ourselves. It’s difficult to know if they’d be willing to make contact with civilizations similar to their own, though it’s likely they’d want to keep to themselves. A kind of intergalactic xenophobia may explain the Great Silence and the Fermi Paradox; the dearth of colonizing waves of ETIs seems to suggest that everyone prefers to stay at home, away from prying eyes.

Image: a megastructure similar to a Dyson ring (Utente/Hill/CC BY-SA 3.0)

At the same time, if machine intelligences do rule the cosmos (either locally or across the vastness of space), then we may run into what’s known as the incommensurability problem: for the time being, the differences between human minds and machine minds are so great that communication is impossible. Simply put, predicting the intentions and behaviors of post-biological intelligence is practically impossible.

Beware the Alien Skynet

As noted by Dick, the cosmos may be peppered with artificial superintelligence (ASI) — machines that either succeeded or supplanted their biological forebears.

Predicting the behaviors and intentions of ASIs is a conundrum currently faced by AI theorists who worry about the prospect of machine minds run amok. But it’s also something that astrobiologists and SETI scientists should be concerned about.

What might a machine-based alien superintelligence do with itself? Frighteningly, it may adopt a set of “instrumental goals” to ensure its ongoing existence. If this is the case, we may want to steer clear of them (and by ‘steer clear’ I mean keep a low cosmic profile). Oxford University philosopher Nick Bostrom explains what’s meant by instrumental goals:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.

In other words, while alien artificial superintelligences may each pursue very different primary goals, they will, in the words of Bostrom, “pursue similar intermediary goals because they have instrumental reasons to do so.” He calls this the Instrumental Convergence Thesis.

Physicist and AI theorist Steve Omohundro has taken a stab at trying to predict what these sub-goals might be. His list of drives includes self-preservation, self-protection, utility function or goal-content integrity (i.e. ensuring that it doesn’t deviate from its predetermined goals or values), self-improvement, and resource acquisition. Consequently, advanced machine-based alien intelligences may be extraordinarily dangerous to outsiders.
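
To see why such sub-goals “converge,” here’s a toy sketch in Python (a hypothetical model, not Bostrom’s or Omohundro’s actual formalism): whatever an agent’s final goal happens to be, its odds of achieving it improve if it survives and controls resources, so the instrumental ranking of actions comes out the same for every goal.

    # Toy sketch of instrumental convergence (a hypothetical model, not taken
    # from Bostrom or Omohundro). Whatever the final goal, an agent's odds of
    # achieving it depend on surviving and having resources, so the instrumental
    # ranking of actions comes out identical for every goal.

    FINAL_GOALS = ["map the galaxy", "maximize paperclips", "minimize suffering"]

    ACTIONS = {
        "do nothing":        {"survival_prob": 0.50, "resources": 10},
        "protect self":      {"survival_prob": 0.99, "resources": 10},
        "acquire resources": {"survival_prob": 0.90, "resources": 500},
    }

    def expected_success(survival_prob, resources):
        # You must exist, and have the means, to pursue any final goal at all.
        return survival_prob * min(1.0, resources / 100.0)

    for goal in FINAL_GOALS:
        ranking = sorted(ACTIONS, key=lambda a: expected_success(**ACTIONS[a]), reverse=True)
        print(f"{goal}: instrumental preference -> {ranking}")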

On the bright side, however, a super-powerful machine intelligence may have adopted a primary goal, or utility function, that requires it to remove as much suffering from the Galaxy as possible. Or to create as many meaningful individual experiences as possible, say by converting all usable matter into computronium. Think of it as the pan-Galactic application of the utilitarian ethic. If that’s the case, we should certainly hope to meet them some day. Assuming, of course, that we don’t get destroyed in the process.

Sources: Richard Dawkins: Climbing Mount Improbable | Jared Diamond: Guns, Germs, and Steel: The Fates of Human Societies | Michael Michaud: Contact with Alien Civilizations | Nick Bostrom: Superintelligence: Paths, Dangers, Strategies | Steve Omohundro: “The Basic AI Drives” | Nick Bostrom: “The Superintelligent Will” | Simon Conway Morris: The Runes of Evolution


Contact the author at george@io9.com and @dvorsky. Top image by Ratislav Zagornov via Concept Ships.