Isaac Asimov's First Law of Robotics states that "a robot may not injure a human being or, through inaction, allow a human being to come to harm." That sounds simple enough — but a recent experiment shows how hard it's going to be to get machines to do the right thing.


Roboticist Alan Winfield of Bristol Robotics Laboratory in the UK recently set up an experiment to test a simplified version of Asimov's First Law. He and his team programmed a robot to prevent other automatons, acting as proxies for humans, from falling into a hole.

New Scientist's Aviva Rutkin explains what happened next:

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK. [emphasis added]

Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, "my answer is: I have no idea".
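Winfield's actual setup uses a full consequence-prediction engine, but a deliberately crude one-dimensional sketch can convey the flavor of the dilemma. Everything below is our own invention for illustration, not Winfield's code: proxies drift toward a hole at x = 0, and a greedy rescuer always heads for whichever proxy is currently nearest the hole.

```python
def simulate(robot_x, proxies, robot_speed=2.0, proxy_speed=1.0, reach=0.5):
    """Greedy rescuer in 1-D: proxies drift toward a hole at x = 0;
    each tick the robot heads for the most endangered surviving proxy
    and rescues it on contact. Returns how many proxies it saves."""
    saved = 0
    proxies = list(proxies)
    while proxies:
        # Any proxy that has reached the hole is lost.
        proxies = [p for p in proxies if p > 0]
        if not proxies:
            break
        target = min(proxies)            # re-target every tick (greedy triage)
        step = min(robot_speed, abs(target - robot_x))
        robot_x += step if target > robot_x else -step
        if abs(robot_x - target) <= reach:
            saved += 1                   # reached in time: rescued
            proxies.remove(target)
        proxies = [p - proxy_speed for p in proxies]
    return saved
```

With these made-up numbers, `simulate(10, [5, 15])` saves both proxies, `simulate(10, [3, 15])` saves only one, and `simulate(30, [5, 6])` saves neither. The real robot's fatal indecision came from continuously re-predicting consequences for every possible action, which this simple greedy rule does not capture.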

Experiments like these are becoming increasingly important, especially as we consider self-driving cars, which will have to weigh the safety of their passengers against the risk of harming other motorists or pedestrians. These are extremely complicated scenarios with plenty of ethically grey areas. But as Rutkin points out in the New Scientist article, robots designed for military combat may offer some solutions:


Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an "ethical governor" – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.
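The core idea of a governor is a hard veto layer between targeting and firing. The sketch below is our own illustration of that pattern, not Arkin's actual system; the field names and rules are hypothetical stand-ins for the rules-of-war constraints the article mentions.

```python
# Toy governor-style veto (our own illustration, not Arkin's system):
# a candidate strike is checked against hard constraints before the
# weapon may fire. Field names on `strike` are hypothetical.
PROTECTED = {"school", "hospital", "place_of_worship"}

def governor_permits(strike):
    """Return True only if the proposed strike violates no hard
    constraint; any single violation vetoes the action outright."""
    if strike["target_type"] != "combatant":
        return False                     # never engage non-combatants
    if PROTECTED & set(strike["nearby_sites"]):
        return False                     # protected site within blast radius
    if strike["expected_collateral"] > 0:
        return False                     # minimise casualties: veto
    return True
```

The design point is that the governor never chooses targets; it can only forbid, which is why drones wrapped in such a layer can "choose not to shoot."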

Read the entire article at New Scientist.
