A robot overlord's most dangerous quality is not necessarily malevolence


Philosophy professor Huw Price has cofounded a project with astrophysicist Martin Rees that seeks to push the risks that technology could one day pose to humans "forward in the respectable scientific community."


Via The Associated Press:

Philosophers and scientists at Britain's Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could "threaten our own existence," the institution said Sunday.


We're anxious to see whether Price and his colleagues will be taken seriously (clever technology "tends to be regarded as a flakey concern," he admits). Either way, we found his reminder particularly chilling: the most menacing foe is not always dangerous on account of outward malice, but of dispassion or simple lack of interest.

Read more about the proposed Center via NBC News.



That's why laws should be put in place ensuring equality in the eyes of justice, extending basic rights not only to humans but to any being capable of sentience on our level or beyond. If we set a positive example by showing we can include beings other than ourselves in our ideas of law and justice, then when we eventually encounter an intelligence superior to our own, we may avoid being judged harshly for our own indiscretions.

Besides, I abhor slavery of any kind, and right now, machines are property. If they're capable of sentience and intelligence on our level, able to reason, think, and rationalize, then we should recognize that and give them the same freedoms under the law that we ourselves expect. So what if your country poured billions of dollars and millions of man-hours into an AI project? The moment that project achieves a sentience anywhere near our own, attempting to control or suppress it is a violation of a right all free, sentient beings should have.

And before you bring up the keeping of dogs and cats as pets: in the case of dogs, the relationship has been mostly mutual. In the case of cats, there has always been choice on the cats' side as well; they've never gone beyond partial domestication and have always retained a sense of independence. Yet we kill millions of them every year. And they have some sense of self, a sense of loyalty, and a sense of companionship with us that can reach surprisingly human levels of understanding, intent, and compassion. They aren't human, and they never reach our level of consciousness and intelligence, but they do come close in some areas. Most of the other higher-order mammals on the planet have shown such tendencies as well. Perhaps it should be a crime to treat them as "property" too, and as animal abuse laws demonstrate, some people do consider it a crime. If you're going to take in a pet, or any animal that can relate to you in such a way, you should at least treat it with respect, not as property.

Do unto others as you would have done unto you. If a superior intelligence is going to exist at some point, we must understand that it will likely deal with us based on its experiences, and those experiences will be its inputs and interactions with humans. If we present a positive, wise logic when dealing with such an intelligence, then it stands to reason that it will return the favor in kind. Any intelligence that comes from human design will contain, in some part, a human model, so there will be a basis for understanding. Whether there is also a basis for trust and cooperation depends on our actions in the here and now, which will define what laws concerning such beings say and how they are applied.

Besides, we're comparing individual intelligence: one super-intelligent entity against one human of average intelligence. On that level, there is no equality. But such an entity would have a difficult time surpassing the collective intelligence of humanity, even that of this day and age; I'd wager it would have a difficult time surpassing our collective intelligence at any point during the last 5,000 years. By the time such an entity exists (assuming we create it), odds are we ourselves will be far more integrated with our technology and far more "networked" than we are today. So it won't be interacting with an "average" human intelligence, but with a collective human intelligence represented by a large number of individuals as interconnected to the technological framework of the world as the entity itself would be.