We've had robots working in our factories for decades, but no one has ever programmed Isaac Asimov's three laws of robotics into one. Now some scientists and programmers are trying to make it happen, and not because they're fans of golden-age sci-fi. Why are the three laws so vital to future human/robot interactions, and why is it so hard to program a modern robot to follow them?

First, a quick recap of the laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
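As a thought experiment, the priority ordering of the laws is easy to write down in code; everything hard lives inside the predicates. This is a minimal sketch (not anything from the actual project), where `harms_human`, `violates_order`, and `endangers_robot` are hypothetical stand-ins for the genuinely difficult problems of perceiving humans, predicting harm, and understanding orders:

```python
def choose_action(candidates, harms_human, violates_order, endangers_robot):
    """Pick an action that satisfies the three laws in priority order.

    The three predicates are hypothetical placeholders -- in a real robot,
    implementing them (recognizing a human, predicting harm) is the hard part.
    """
    # First Law: discard anything that would harm a human. No exceptions.
    safe = [a for a in candidates if not harms_human(a)]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if not violates_order(a)] or safe
    # Third Law: among those, prefer actions that preserve the robot.
    preserving = [a for a in obedient if not endangers_robot(a)] or obedient
    return preserving[0] if preserving else None


# Toy usage: "crush" would harm a human, so it is never chosen.
best = choose_action(
    ["crush", "move_aside"],
    harms_human=lambda a: a == "crush",
    violates_order=lambda a: False,
    endangers_robot=lambda a: False,
)
```

The `or safe` / `or obedient` fallbacks encode the subordination clauses: a lower law yields rather than leaving the robot with no lawful action.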
Most robots used in industry are designed and programmed to do one very specific task. They can move with great speed and apply incredible force, which is why humans can't work too close to them. Robots can be designed to sense proximity to a human and avoid contact, but the added complexity makes them less reliable.

Programming the three laws is not as simple as telling a robot, "OK, First Law: don't hurt humans." You have to give the robot a way of recognizing what a human is and a way of preventing itself from hurting one.

A project funded by the European Union is working to develop robots that are simple and robust, using new methods of manipulating robotic limbs that mimic human muscle action. New sensors give a robot a kinesthetic sense of where its own body parts are. Lighter robots add another safety factor, while limb actuators that "decouple" the force of the motor when the limb strikes something produce softer impacts.

If our future includes humans working side by side with robots, then finding a way to realistically incorporate the three laws is a high priority.

Do No Harm To Humans: Real-life Robots Obey Asimov's Laws [Science Daily]
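The "decoupling" idea described above can be pictured as a torque cutoff: when a limb meets unexpected resistance, the controller stops driving the motor instead of pushing through. The sketch below is purely illustrative; the threshold, units, and function are assumptions, not details of the EU project's actual control scheme:

```python
def command_torque(desired_torque, external_force, force_limit=50.0):
    """Toy model of a force-decoupling actuator.

    If the measured external force on the limb exceeds a (made-up)
    safety limit, cut the motor torque to zero so the limb goes limp
    rather than driving into whatever it has struck.
    """
    if abs(external_force) > force_limit:
        return 0.0  # decouple: stop transmitting motor force through the limb
    return desired_torque
```

In a real compliant actuator this behavior comes from the mechanism itself (elastic elements between motor and limb), so the limb softens on impact even if the software is slow to react.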