
Updating Asimov: Engineers say new laws needed to govern human-robot relationships
[November 22, 2009]

Nov 22, 2009 (The Columbus Dispatch - McClatchy-Tribune Information Services via COMTEX) -- If robots someday turn on humans, David Woods knows who he'll blame.

Humans.

How do we keep this from happening? Laws.

Sound familiar? Author Isaac Asimov created a list of them when he wrote I, Robot, his 1950 collection of nine robot-themed short stories. But Woods, an Ohio State University engineering professor, says Asimov focused on the wrong side of the equation. Those laws were all about the robot.



People and robots make up a system, Woods says, and designing that system so that it works well is up to humans.

"People say: 'Robots will be self-aware eventually.' That's a retreat from responsibility," Wooods said. "Once something goes wrong, it's a device. Who programmed it? Who directed it?" Woods and Robin Murphy, who designs rescue robots at Texas A&M University, have written their own set of robot laws that focus squarely on our responsibility to design and program robots that work with people.


Their work recently appeared in the journal IEEE Intelligent Systems.

Woods wants to step away from the scenarios from The Terminator, or even WALL-E, that make robots heroic and autonomous. Today's robots are a long way from that.

They need lots of direction and they need to know when to cede what autonomy they do have to human operators with better judgment. And they need to know when to ask for help.

When the Spirit rover landed on Mars in 2004, it immediately sent back pictures and then sat tight, waiting for instructions. Had it started moving right away, it could have become tangled in the air bags that protected its landing. That would have ended the mission before it started.

"They had to get some more information and do some planning before they let it go," Woods said. "The system of the robot and the people working with the robot had to adapt." That Spirit is in its sixth year of exploring Mars is more a testament to human planning and design than to the plucky robot that could.

In Columbus, the Fire Division's bomb squad has worked with robots since 1991. And while the machines have improved over the years, their effectiveness still depends mostly on the skill of the human operator, said Jim Andrews, a bomb technician.

"They're only as good as the robot technician," he said. The operators decide what tools the robot needs to handle the situation, he said. And it's the operator and the bomb team that must use their creativity to get each job done.

While they have guidelines to help them make decisions, there's always something in each situation that no set of "if, then" statements could cover.

"That's when you use the human factor," Andrews said.

Even in the military, which is on the cutting edge of robot technology, people are always part of the equation.

Woods remembers meeting two people: a Marine general and a roboticist.

"The roboticist says, 'I'm here to talk about the autonomy of my robot, what it can do by itself,' " Woods said. "The Marine general looked him and said, 'We don't let anybody do anything by themselves.' " An MQ-9 Reaper unmanned aerial vehicle armed with Hellfire missiles lost contact with its operator over Afghanistan in September, according to Aviation Weekly.

Because its designers planned for that scenario, the aircraft was programmed to circle for a time to try to regain contact. Failing that, the craft was hardwired to fly back to the base. Neither fail-safe system worked, and a jet pilot was sent to shoot the craft down before it could leave Afghan airspace.
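The article only sketches that lost-link behavior in prose: loiter and try to regain contact, and if that fails, head home. Below is a minimal illustrative sketch of that kind of fail-safe logic in Python. Every name and value in it (handle_lost_link, link_is_up, LOITER_TIMEOUT_S and so on) is hypothetical and is not drawn from any actual aircraft software.

```python
import time

# Assumed value: how long to circle while trying to regain the operator link.
LOITER_TIMEOUT_S = 15 * 60

def handle_lost_link(link_is_up, loiter, return_to_base):
    """Sketch of a two-stage lost-link fail-safe.

    link_is_up     -- callable returning True once contact is restored
    loiter         -- callable that keeps the aircraft circling in place
    return_to_base -- callable that flies the aircraft back home
    """
    deadline = time.monotonic() + LOITER_TIMEOUT_S
    while time.monotonic() < deadline:
        if link_is_up():
            return "link restored, operator back in control"
        loiter()          # first fail-safe: circle and keep trying
    return_to_base()      # second fail-safe: head home autonomously
    return "returning to base"
```

In the incident the article describes, neither stage succeeded, which is why a piloted jet was sent to bring the aircraft down.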

"They did the right thing," Woods said.

"These things break. They're real devices in the real world and in difficult places in the world." Woods, an adviser to the panel examining the explosion of the space shuttle Columbia, is an expert is in the emerging field of "resilience engineering," which is about figuring out how to design complex systems that can adapt and succeed even when surprises arise.

"The high-reliability organizations are the ones that are constantly aware that things could go wrong in the near future and are constantly trying to investigate, learn and prepare in case things do go wrong," Woods said.

"The ones who are good are always worried." [email protected] Rewriting robotic laws Starting with Isaac Asimov's Three Laws of Robotics, engineers David Woods (below right) and Robin Murphy created a set of laws that places the responsibility for robot behavior squarely on the people who design and use them.

Asimov's Laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Woods and Murphy's laws:

1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.

2. A robot must respond to humans as appropriate for their roles.

3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

To see more of The Columbus Dispatch, or to subscribe to the newspaper, go to http://www.columbusdispatch.com. Copyright (c) 2009, The Columbus Dispatch, Ohio. Distributed by McClatchy-Tribune Information Services.
