You've probably heard of a concept known as the Technological Singularity — a nebulous event that's supposed to happen in the not-too-distant future. Much of the uncertainty surrounding this possibility, however, has led to wild speculation, confusion, and outright denial. Here are the worst myths you've been told about the Singularity.
Top image: Babel Myth by Frederic St. Amaud via 3Dtotal.
In a nutshell, the Technological Singularity is a term used to describe the theoretical moment in time when artificial intelligence matches and then exceeds human intelligence. The term was popularized by science fiction writer Vernor Vinge, but full credit goes to the mathematician John von Neumann, who spoke (in the words of Stanislaw Ulam) of the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
By "not continue" von Neumann was referring to the potential for humanity to lose control and fall outside the context of its technologies. Today, this technology is assumed to be artificial intelligence, or more accurately, recursively-improving artificial intelligence (RIAI), leading to artificial superintelligence (ASI).
Because we cannot predict the nature and intentions of an artificial superintelligence, we have come to refer to this sociological event horizon as the Technological Singularity — a concept that's open to wide interpretation, and by consequence, gross misunderstanding. Here are the worst:
"The Singularity Is Not Going to Happen"
Oh, I wouldn't bet against it. The march of Moore's Law appears unhindered, while breakthroughs in brain mapping and artificial intelligence continue apace. There are no insurmountable conceptual or technological hurdles awaiting us.
And what most ASI skeptics fail to understand is that we have yet to even enter the AI era, a time when powerful — but narrow — systems subsume many domains currently occupied by humans. There will be tremendous incentive to develop these systems, both for economics and security. Superintelligence will eventually appear, likely the product of megacorporations and the military.
This myth might actually be the worst of the bunch, something I've referred to as Singularity denialism. Aside from perhaps weaponized molecular nanotechnology, ASI represents the greatest threat to humanity. This existential threat hasn't reached the zeitgeist, but it'll eventually get there, probably after our first AI catastrophe. And mark my words, there will come a day when this pernicious tee-hee-rapture-of-the-nerds rhetoric will be regarded as being on par with, if not worse than, climate change denialism is today.
"Artificial Superintelligence Will Be Conscious"
Nope. ASIs probably won't be conscious. We need to see these systems, of which there will be many types, as pimped-up versions of IBM's Watson or Deep Blue. They'll work at incredible speeds, be fueled by insanely powerful processors and algorithms — but there will be nobody home.
To be fair, there is the possibility that an ASI could be designed to be conscious. It might even redesign itself to be self-aware. But should this happen, it would still represent a mind-space vastly different from anything we know of. A machine mind's subjective experience would scarcely resemble our own.
As an aside, this misconception can be tied to the first. Some skeptics argue there will be no Singularity because we'll never be able to mimic the complexities of human consciousness. But it's an objection that's completely irrelevant. An ASI will be powerful, sophisticated, and dangerous, but not because it's conscious.
"Artificial Superintelligence Has to Be Friendly"
There's a meme among some Singularitarians that goes like this: As intelligence increases, so too does empathy and benevolence. According to this thinking, as AIs become smarter and smarter, we should expect to see them become friendlier and friendlier.
Sadly, this won't be the case. First, this reasoning implies (1) a certain level of self-reflexivity and introspection on the part of the ASI (which is absolutely not a given), and (2) a utility function or ethical imperative that's closely aligned with our own. On this last point, we can't possibly predict or know the cogitations of a completely alien machine mind — one that's drifted several orders of magnitude beyond our own — or what it would find morally valuable or not. Moreover, if it's programmed with a set of goals that are unalterable, it will always prioritize those initial parameters above everything else, as illustrated by the infamous paperclip scenario. As AI theorist Eliezer Yudkowsky has said, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
"Accelerating Change is the Singularity"
Ever since the term became popular, we've been told to expect several different kinds of Singularities — some of which don't even have anything to do with ASI. We've been told to expect an economic singularity or even a razor blade singularity. Some have even equated (or conflated) the Singularity with radical life extension, mind uploading, transhuman intelligence, and the merging of humans with machines (more on that next). Kurzweilians are particularly guilty of this, often equating the Singularity with the steady, accelerating growth of all technologies, including AI — a perspective that largely fails to account for an uncontrollable intelligence explosion.
"Humans Will Merge With the Machines"
Some say we don't need to worry about the Singularity because we'll just tag along for the ride. By the time the Singularity arrives, goes the argument, we'll be so closely integrated with our machines that we'll be one and the same. It'll be a Singularity for everyone!
The first problem with this theory is that human cyborgization and/or uploading will happen at a much slower pace than advancements in AI (mostly for ethical reasons). The second problem is that the immediate source of an RIAI will be highly localized. It'll be one system (or multiple systems working in tandem to take advantage of synergistic effects and/or game-theoretic strategies designed to ensure future freedom of action) that suddenly goes off the deep end, iteratively improving upon itself as it works to achieve a certain goal or configuration (a so-called "hard takeoff" Singularity scenario). In other words, we'll just be bystanders to the Singularity.
Sure, an ASI may decide to merge itself with as many humans as possible — but that has some rather dystopian connotations to it.
"We Will Be As Gods"
If we survive the Singularity, and assuming there's still a place for us in a completely redesigned machine-ruled world, we may collectively possess unprecedented powers. We may be able to exert these "god-like" gifts as a hive mind. But as individuals, not so much. The jury is still out on how much intelligence a single mind can handle. Referring to radical intelligence augmentation (IA) for humans, futurist Michael Anissimov has said,
One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider "crazy." There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing these people they are insane.
Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.
"Things Won't Change Too Radically After the Singularity"
Hardly. Think of the Technological Singularity as a hard reset button on virtually everything — right down to each and every molecule on Earth. As long as the laws of physics and theoretical computation will allow it, an AWOL machine mind could make anything happen. What lies beyond the Singularity is nothing we can imagine — a conundrum that may be hindering science fiction.