"The relationship [between humans and machines] is profoundly social," says Stanford researcher Clifford Nas, who specializes in communication between humans and interactive media, in a recent interview with NPR's Alix Spiegel. "The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human... people will respond with an enormous array of social responses."
In 1996, Nass demonstrated that humans observe the rule of reciprocity with machines — if a computer does something helpful for you, you're more likely to help the computer in return. In 2007, robotics professor Christopher Bartneck took things a bit further.
In a unique stress test of the social bond uniting humans with machines, Bartneck devised an experiment to observe how humans would react when tasked with "taking the life" of an anthropomorphic robot pleading for its survival. The results were remarkable.
NPR's Alix Spiegel reflects on Bartneck's findings:
At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on the one in Milgram's obedience experiments would make clear that the human needed to turn the cat robot off, and it was also made clear to them what the consequences of that would be: "They would essentially eliminate everything that the robot was - all of its memories, all of its behavior, all of its personality would be gone forever."
In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. "You are not really going to switch me off, are you?" the cat robot begs, and the humans sit, confused and hesitating. "Yes. No. I will switch you off!" one female research subject says, and then doesn't switch the robot off.
It stands to reason that the more relatable a machine or robot is, the harder it would be for most people to deactivate it. Is it reasonable to assume that we will, one day soon, encounter robots so sympathetic that they inspire mercy, pity, or charity? As Scott Adams, creator of Dilbert, puts it:
What happens in the near future when robots begin to acquire the appearance of personality? Will you still be willing to hit the kill switch on an entity that has been your "friend" for years? I predict that someday robots will be so human-like that the idea of decommissioning one permanently will literally feel like murder. Your brain might rationalize it, but your gut wouldn't feel right. That will be doubly true if your robot has a human-like face.