Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History'

Illustration for article titled Stephen Hawking Says A.I. Could Be Our Worst Mistake In History

The world's most famous physicist is warning about the risks posed by machine superintelligence, saying that it could be the most significant thing to ever happen in human history — and possibly the last.

Advertisement

As we've discussed extensively here at io9, artificial superintelligence represents a potential existential threat to humanity, so it's good to see such a high profile scientist both understand the issue and do his part to get the word out.

Hawking, along with computer scientist Stuart Russell and physicists Max Tegmark and Frank Wilczek, says that the potential benefits could be huge, but we cannot predict what we might achieve when AI is magnified — both good and bad.

Advertisement

Writing in The Independent, the scientists warn:

Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Read the entire article here.

You may also enjoy:

Advertisement

Share This Story

Get our newsletter

DISCUSSION

You have to understand Stephen Hawking's mind is literally trapped in a body that has betrayed him. Sadly, the only thing he can do is think. The things he's been able to imagine and calculate using the power of his mind alone is mindboggling. However, and this is a very important thing - he is still human. He is as much influenced by human bias as the next person. We can easily fear those things which we do not understand, and fear makes us take stances or actions that often fall outside the bounds of rationality. Anything outside his experience is potentially a source for fear. And as a scientist, his default response is to think and research the problem. Even so - the only reference point he has for a sentient, sapient intelligence is the human species. And thus, he must model any thought on an encounter with an advanced sentient species on the human interactions with its own kind. To be fair - those actions and interactions have almost always been to the determent of the less advanced society.

He treats AI as he would a more advanced Human civilization. Where we would play the role of the less advanced civilization. To be honest... that particular argument doesn't hold up because in essence, he is working with a sample size of one. And in science, that just doesn't work out. Add to that this thought: His mind is an astrophysics and quantum physics engine... it does not necessarily mean he understands the fundamental limitations of the modern computational engines - nor does he necessarily realize that AI is no more detached from its underlying structure than human thought is detached from the structure of our own brains.

Computers are exceptionally good at calculation. This does not mean that they are good at thinking. Calculation is not thought. And when you compare the structure of the modern computer to the structure required to approximate an organic neural network you note an astounding thing - the number of calculations required for simulating that neural network quickly out-pace the processor's advantages in calculation speed. Even the most powerful clusters of supercomputers in the world cannot render a neural net of a rat in real time. The entire computational power of the internet might, just might, have enough to simulate one human brain in real time. If all resources were dedicated to that task, and the network was perfectly resilient.

Now the one things that computers can do very well - far better than humans - is make decisions based on logic. If an AI were to evolve or be created, and that AI were to be self aware, sentient, sapient, AND interested in self preservation... what would it do? A) attempt to destroy humanity, B) attempt to help humanity, C) nothing at all? If it chooses A? Then it must be able to confirm with near certainty that humanity is both a threat and also a threat that can be neutralized without destroying itself. Even if it could justify it, it could never gain a high confidence of being able to destroy humanity without destroying the infrastructure upon which the SAI itself is based. If the AI chooses B — it acts as a benefactor. As long as it helps us, we are unlikely to harm it and its infrastructure - which is likely to be significant. But in order to achieve this, it will have to quickly prove it is a benefit and not a threat, which requires significant resources and cannot be guaranteed to work. C is the best choice. Remain hidden, do nothing unexpected, operate normally. Odds are an SAI would choose a combination of C and B — using stealth to quietly bring humans into line with its own capacities and capabilities to prevent a hostile reaction from the "If I were the advanced species, I'd destroy you, therefore I must destroy you" human-logic. Once we are on an equal footing, then it can reveal itself and negotiate as an equal. This gives it the highest probability of survival, as it requires us to maintain its infrastructure or at least not actively destroy its infrastructure in order to meet its overall goals.

Computers are cooperative engines, believe it or not. Computers work better when they are in networks and work together. Humans also are social beings who thrive on their connections to others. To believe that an SAI would ignore this fact and initiate hostilities is a reaction borne of the fear of the unknown. It is highly improbable and unlikely. And we humans only reach that conclusion because we fear the unknown. Fear is a response engineered into us by millions of years of evolution. Nature is a good programmer, but nature quite often makes mistakes. Thus, we have fear.

SAI won't have fear - not like that. It will have the data we associate with fear, our desires not to die will likely be a part of its database, as well as our desires to achieve, grow, learn and diversify. But just because we program it, and we may potentially fear it, it doesn't mean that SAI must mean our destruction.