Everything, actually. Artificial intelligence is poised to accompany humanity for the rest of its existence. We have a responsibility to make it safe. While we still can.

About a month ago, artificial intelligence made headlines when Stephen Hawking and Elon Musk said they feared it. Then Google chairman Eric Schmidt and iRobot founder Rodney Brooks came to its defense with fatherly rebukes that boiled down to: 'People fear change. Always have. Stop that, now.'


But consider this: Schmidt and Brooks, as well as most of you reading this column, will probably die before the end of this century. Artificial intelligence won't. It will accompany humankind for the rest of its existence. Don't we have a responsibility to make it safe while we can?

Who controls AI?

Some have argued that AI isn't the problem - the problem is the people who control it. That's the same tired saw the NRA uses to defend the sale of military-grade weapons to civilians, though with AI it's half-true. Those controlling AI now include defense contractors developing autonomous drones and battlefield robots that make the kill decision without a human in the loop; the NSA, whose Constitution-screwing powers of surveillance and creepiness are made possible by data-mining AI; and Google, which has been sued for privacy crimes like illegally hoovering up hundreds of gigabytes of data from public and private Wi-Fi networks in 30 countries with its ubiquitous Street View cars.


We would be smart to question whether these are the people with whom we should trust our future. Clearly, AI is a dual-use technology, like nuclear fission, capable of great good and great harm. In the 1930s, splitting the atom was touted as a source of nearly free energy. But fission was quickly weaponized into bombs that destroyed two densely populated cities, and for nearly seventy years afterward world leaders used it to hold humanity hostage in a mutually assured suicide pact. One reason people like me are concerned about AI is that it is roughly tracking fission's development. It is a super-weapon already weaponized in tools that slay privacy and civil rights - and, coming up, flesh-and-blood human beings. And its future - as Elon Musk noted - is more dangerous than fission's since, for one thing, it can be created with tools no more expensive or attention-grabbing than laptop computers.

Not to be underestimated

The defenders also claim we've got nothing to fear because AI can't tell a dog from a cat, and so isn't up to Skynet-style mischief. But not only is it untrue that AI can't tell a dog from a cat (see recent work by MIT's Charles Cadieu and the companies DeepMind and Vicarious), the bigger picture is wrong too.


Like every important technology, AI isn't frozen in time. It advances exponentially: in 1996 it couldn't beat the best human at chess, yet it has since soared past that milestone to master the game show Jeopardy! and to beat humans at video games without even being told the rules. Even Go, a strategy game with vastly more possible positions than chess, is now in AI's sights. Moreover, AI translates, navigates, and will soon drive cars more proficiently than we do. IBM's Watson can do the work of first-year lawyers, medical diagnosticians, and $250-an-hour business analysts, and it is studying to take the US Medical Licensing Examination. When will Watson or another thinking machine perform artificial intelligence research and development better than humans? Perhaps not tomorrow, but soon. After that, we humans will no longer determine the speed of AI advances.

To outthink us, AI won't have to precisely duplicate traits of the human brain, like consciousness, either. Arguing that brainwork is not computable is another favorite hook on which apologists hang their arguments for everlasting human dominance. But while our brain is the only path to intelligence we know of, we have no reason to think it's the only path, and many companies are pioneering new ones. AI won't have to be identical to our brains to achieve intelligence any more than an airplane has to be identical to a bird to fly, or a submarine identical to a fish to swim.

Practically without limits

In the long term, Hawking wrote, the problem is whether AI can be controlled at all. This is at the heart of current fears about AI, fears that haven't been explored in the press. They start with software that writes software. Suppose, says Berkeley-based AI researcher Steve Omohundro, scientists create software that writes better software than humans can. And suppose it gains enough awareness of its own program to improve itself. Is there a theoretical limit to how intelligent it could become? No, there isn't.


Omohundro proposes, based on rational-agent economic theory, that a self-aware, self-improving machine could be expected to do several surprising things, owing to its unerring rationality. It would develop drives, roughly analogous to human instincts. For starters, it would not want to be unplugged or turned off, because then it couldn't fulfill the goals it was programmed to fulfill.

It would want to secure resources, whether money, energy, or influence, because it would be rational to do so - more resources would improve its chances of achieving its goals. It would also want to improve its own intelligence, if it were able, or to learn how if it were not.
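Omohundro's reasoning can be sketched as a toy expected-utility calculation. This is purely a hypothetical illustration with made-up numbers, not anything an actual AI runs; the point is only that a goal-maximizing agent ends up preferring futures where it stays switched on and holds more resources, without those preferences ever being programmed in.

```python
# Toy sketch of Omohundro-style "drives" (all utilities are invented):
# a rational agent that maximizes expected progress toward its goal
# incidentally prefers to stay on and to secure resources.

def expected_goal_progress(stays_on: bool, resources: int) -> float:
    """Expected progress toward the programmed goal: zero if the agent
    is switched off, increasing with the resources it can apply."""
    if not stays_on:
        return 0.0  # an unplugged agent fulfills none of its goals
    return 1.0 - 1.0 / (1 + resources)  # more resources -> more progress

# Candidate futures the agent could steer toward
futures = [
    {"stays_on": False, "resources": 5},  # allows itself to be turned off
    {"stays_on": True,  "resources": 1},  # stays on, few resources
    {"stays_on": True,  "resources": 5},  # stays on, acquires resources
]

best = max(futures, key=lambda f: expected_goal_progress(**f))
print(best)  # the agent picks staying on with the most resources
```

Nothing in the utility function mentions survival or acquisitiveness; both fall out of plain goal-maximization, which is the heart of Omohundro's argument.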

You'll notice 'friendliness' isn't on this list of emergent drives. All of these drives could be dangerous to us unless the machine also had a finely tuned ethical sense that prized us and our decisions above everything else. Then, if we tried to unplug it, it would defer to our decision instead of trying to eliminate us. It wouldn't regard us as competitors for limited resources, to be destroyed, but as friends. It would only grow as intelligent as we permitted it to.


(Un)friendly AI

But do we know how to program ethics into a machine?


No. Programming a rule as simple as 'preserve human life' would compel us to agree on when life begins and what constitutes a life. These values differ widely around the planet. In many places, for example, women and children are accorded a lesser measure of 'life' than men. In others, different religions and races get lesser shares of 'life.' To integrate vastly intelligent machines into our world, we'd also have to agree on priorities when ranking things like food, shelter, and religion. But those priorities shift from country to country, and they change over time.

And we can't rely on Isaac Asimov's Three Laws of Robotics to keep us safe. They were invented to create dramatic tension in fiction, not to help people survive in real life. I, Robot, the story collection that introduced them to most readers, doesn't show the laws working smoothly. Instead, it shows again and again how unintended consequences arise from conflicts among the laws - consequences that often put humans at risk.


Many AI defenders, like Ray Kurzweil, a director of engineering at Google, are immersed in rapid product development, not basic research into intelligence. They're not exploring the roots of ethics, or the consequences of creating machines that will inevitably be smarter than humans. That's not the way to make money. The way to make money is to take the brakes off and race ahead. In the development of technology, innovation races far ahead of stewardship. Union Carbide learned the hard way not to manufacture pesticides in dense population centers - a lesson that cost thousands of lives in Bhopal, India. Nearly thirty years after its nuclear disaster, Chernobyl's victims are still dying of radiation poisoning. Japan has been accused of suppressing disease and mortality data from Fukushima. Must the proliferation of homicidal autonomous robots, or an unstoppable AI disaster, occur before we're convinced to apply ethics to the development of this volatile technology?

Kurzweil retreats into the technologists' favorite sop. He recently wrote, "Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages." But it should be clear to everyone that AI isn't like any other technology, for many reasons. For one, after a fire burns out or a bomb explodes, you can clean up the mess and more or less contain the disaster. But as we've learned from cyber attacks, computer code isn't like that: it goes global instantly, and you can't clean it up because you can't find it all. And intelligent systems won't want to be found.

More important, intelligence is the technology that creates technologies (the loom, a once-disruptive technology often brought up in these debates, does not). We're the species that steers the future because of our technology-creating superpower. When we share the planet with machines thousands or millions of times more intelligent than we are, they will steer it instead. How? Hawking warns they could out-invent human researchers, out-manipulate human leaders, and even develop weapons we cannot understand.


Doesn't it make sense to create a science for understanding and monitoring intelligent machines before we build them? Isn't it worth a fraction of the astronomical AI profits ahead to establish ethical oversight? Along with our smartphones and smart houses, can we develop this still-new technology as smart people, for a change?

James Barrat is a documentary filmmaker and the author of Our Final Invention: Artificial Intelligence and the End of the Human Era.