LONDON, April 23 (UPI) -- So-called killer robots could pose major legal questions if military forces develop them to be completely autonomous, Human Rights Watch said from London.
Human Rights Watch said it was coordinating the global "Campaign to Stop Killer Robots." The organization said Tuesday it was concerned that military forces may be developing autonomous weapons that would be able to strike targets without human interaction.
Steve Goose, director of the Arms Division at Human Rights Watch, said human involvement is needed in battlefield decisions.
"Killer robots would cross moral and legal boundaries and should be rejected as repugnant to the public conscience," he said in a statement.
The rights organization noted that the U.S. Defense Department said in November that a person must be "in-the-loop" in decisions involving the use of lethal force.
The U.S. government has sparked concern over its use of unmanned aerial vehicles in its campaign against suspected terrorists on foreign soil.
Ronald Arkin, a robotics professor at the Georgia Institute of Technology, told the BBC the rights campaign may be premature.
"A moratorium as opposed to ban -- where we say, 'We're not going to do this until we can do it right' -- makes far more sense to me than simply crying out, 'ban the killer robots,'" he said.