Should killer robots be a human rights issue?


Lethal autonomous robots (LARs) are robots programmed to kill enemies without any human intervention or control. They are, arguably, the precursors to Terminator-like combatants. And their existence raises the question: Can a machine violate human rights?

One U.N. representative believes it can. Christof Heyns is the U.N. Special Rapporteur who deals with extrajudicial, summary or arbitrary executions. Last week, he issued a special report on LARs, urging nations to step back and think about the implications of these devices before deploying them widely. He's worried that removing human ethics from the practice of war could turn these robots into killing machines with no sense of justice.

He's not referring to Terminators just yet, but to weapons like Samsung's SGR-A1 automatic machine gun, which guards the border between North and South Korea. It will fire automatically on anyone who approaches and doesn't know the proper password.


According to a release from the U.N.:

“If deployed, LARs will take humans ‘out of the loop,’” Mr. Heyns warned. In his view, “States find this technology attractive because human decision-making is often much slower than that of robots, and human thinking can be clouded by emotion.”

“At the same time, humans may in some cases, unlike robots, be able to act out of compassion or grace and can, based on their understanding of the bigger picture, know that a more lenient approach is called for in a specific situation,” he underscored.

The Special Rapporteur stressed that there is now an opportunity to pause collectively, and to engage with the risks posed by LARs in a proactive way, in contrast to other revolutions in military affairs, where serious reflection mostly began after the emergence of new methods of warfare. “The current moment may be the best we will have to address these concerns,” he said.

Of course, ethics are one issue. But malfunctions are just as serious. There have already been accidents in which robots like the SGR-A1 malfunctioned and fired on friendly forces. Similar robots used in industrial production have also gone buggy and severely maimed factory workers.


What will human rights look like in an age of killer robots who kill without any human giving the order? Obviously, as Heyns indicates, we'll have to change the way we understand things like "summary execution." But we'll also have to rethink our laws. After all, who is responsible when an autonomous robot accidentally kills a soldier fighting alongside it? It seems as if we should figure this question out before we start the next robot war.





Another perspective on this may sound strange now, but it could set a much larger precedent for the future:

What do killer robots mean for Artificial Intelligence rights?

Consider: Killing soldiers on a battlefield ain't easy. Humans are pretty tricky, so that will place "evolutionary" pressure on us to create better kill-bots. So what happens if we do invent sentient war machines? Not only would we be creating a silicon slave-race, we'd be forcing it to perform a task that is by definition immoral.

What happens if they feel regret at their actions, or sorrow for the consequences? What would it mean if we then tried to erase those reactions from them? Is it ethical to make an intelligent being whose sole purpose is killing other intelligent beings, just because it is inorganic?

Come to think of it, if the robots were smart enough, even a robot-on-robot war would be a continuous cycle of cybernetic genocide.