Quantum physicist David Deutsch has penned a provocative, must-read article for Aeon Magazine in which he argues that artificial intelligence might someday be possible — but only if we make a major breakthrough in our fundamental understanding of how human cognition and consciousness work. And to get there, he says, we'll need to listen to what the philosophers have to say.
Deutsch opens his essay by noting how six decades of research into the subject have resulted in virtually no progress. He suggests that the very laws of physics imply that artificial intelligence must be possible. But the roadblocks to its development, he says, stem from ill-defined concepts, human prejudices, grossly underrated and neglected areas of research, and plain human ignorance.
First and foremost, argues Deutsch, we need to properly distinguish between simple AI and what's called artificial general intelligence, or AGI. An AI, says Deutsch, can be something as simple as a chatbot, or an algorithm like the one that helps Siri follow your commands on an iPhone.
An AGI, on the other hand, is an attempt to approximate the way a human mind works — including self-awareness. It's only by carefully distinguishing between the two that we'll be able to stop devaluing the potential for AGI and move forward, he says. Consequently, Deutsch suggests that we need to adopt a "philosophy of mind" approach to supplement our computer, cognitive, and neurological sciences. He writes:
Perhaps the reason self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Gödel's theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. And so has consciousness. And for consciousness we have the problem of ambiguous terminology again: the term has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations ("qualia"), which is intimately connected with the problem of AGI; but at the other end, "consciousness" is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.
AGIs will indeed be capable of self-awareness – but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. That does not mean that apes who pass the mirror test have any hint of the attributes of "general intelligence" of which AGI would be an artificial version. Indeed, Richard Byrne's wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.
Essentially, Deutsch argues that, should AGIs be developed, we would have no choice but to regard them as "people."
And indeed, the issue of personhood is yet another area in which philosophy can inform the subject. Not only will a proper understanding of personhood work to bring about a self-aware AGI, it will also provide a guideline on how to treat such an entity once it emerges. Deutsch writes:
Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word "programming". Treating AGIs like any other computer programs would constitute brainwashing, slavery and tyranny. And cruelty to children too, because "programming" an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. Ignoring the rights and personhood of AGIs would not only be the epitome of evil, but a recipe for disaster too: creative beings cannot be enslaved forever.
There's lots more to Deutsch's excellent essay, and I strongly suggest you check it out — and be sure to read his thoughts on how we might be able to deal with the perils of greater-than-human machine intelligence.
Inset image: imredesiuk/shutterstock.com