
Robotics would improve ethics on the battlefield

14/12/2008
XPinyol

"My research hypothesis is that intelligent robots can behave more ethically on the battlefield than humans do today," says Ronald Arkin, a computer scientist at the Georgia Institute of Technology, who designs software for battlefield robots under contract with the US Army. "That's the theory I defend."

Spy planes, mine detectors and robotic sensors are already commonly used on the battlefield, but they are controlled by humans. What Arkin is talking about is robots that operate entirely on their own.

He and others say that the technology needed to make lethal autonomous robots is not very expensive and is proliferating, and that it is only a matter of time until these robots are deployed on the battlefield. This means, they add, that it is time for people to start talking about whether this technology is something they want to make use of.

Noel Sharkey, a computer scientist at the University of Sheffield in the United Kingdom, wrote last year in the magazine Innovative Technology for Computer Professionals that "this is not Terminator-style science fiction, but harsh reality." And he added that South Korea and Israel are already deploying armed robots as border guards.

"We don't want to get to the point where we say we should have had this debate 20 years ago," says Colin Allen, a philosopher at Indiana University in Bloomington and co-author of the new book Moral Machines: Teaching Robots Right From Wrong.

Randy Zachery, who heads the Army Research Office's Computer Science Department, which funds Arkin's work, says the Army hopes this "basic science" can demonstrate how human soldiers can use and interact with autonomous systems, and how to develop methods that "allow autonomous systems to operate within the limits imposed by the warfighter."

In a report to the Army last year, Arkin outlined some of the potential advantages of autonomous robotic combatants. To begin with, they can be designed without a survival instinct and, therefore, without a tendency to flee out of fear. They can be made so that they do not feel anger or recklessness, Arkin adds, and so that they are invulnerable to what he calls "the psychological problem of 'fulfillment of expectations,'" which causes people to absorb new information more easily when it matches their preconceived ideas.

His report drew on a 2006 survey by Army health authorities, which revealed that less than half of the soldiers and Marines stationed in Iraq said that non-combatants should be treated with dignity and respect, and that 17% said that all civilians should be treated as insurgents.

Arkin envisions a few ways autonomous robots could be used: in counter-sniper operations, to clear buildings of suspected terrorists, or in other dangerous missions. But first they would have to be programmed with rules and instructions about who to shoot, when it is acceptable to fire, and how to distinguish attacking enemy troops from civilians, the wounded, or someone trying to surrender.

Arkin's battlefield simulations take place on computer screens. Robot pilots have the information that a human pilot could have, such as maps with the locations of temples, apartment buildings, schools and other centers of civilian life. They are taught where exactly the enemy troops, war material and priority targets are. And they are given the rules of combat, guidelines that limit the circumstances in which they can initiate and carry out combat.

In one simulation, a robot pilot flies over a small cemetery. The pilot discovers a tank at the entrance to the cemetery, a possible target. But a group of civilians is also present, so the pilot holds fire and continues forward, soon finding another armored vehicle alone in a field. The pilot fires, and the target is destroyed.
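The decision the simulated pilot makes can be sketched, very loosely, as a chain of rule checks. This is only an illustrative toy, assuming made-up condition names; it is not Arkin's actual architecture, which is far more elaborate.

```python
# Hypothetical sketch of a rules-of-engagement check like the one the
# simulation describes. All names and conditions here are illustrative
# assumptions, not Arkin's real design.

def may_engage(target_is_military: bool,
               civilians_nearby: bool,
               near_protected_site: bool) -> bool:
    """Return True only if every rule of engagement is satisfied."""
    if not target_is_military:
        return False  # never fire on non-military targets
    if civilians_nearby:
        return False  # hold fire when civilians could be harmed
    if near_protected_site:
        return False  # e.g. cemeteries, temples, schools
    return True

# Tank at the cemetery entrance, civilians present: hold fire.
print(may_engage(True, True, True))    # False
# Armored vehicle alone in a field: engage.
print(may_engage(True, False, False))  # True
```

The point of such a structure is that every firing decision must pass all constraints; a single violated rule vetoes the engagement, mirroring the "limits imposed by the warfighter" mentioned above.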

Some people who have studied this question worry that battlefield robots designed without emotions would lack compassion. Arkin, a Christian who acknowledges the help of God and Jesus in the prologue of his 1998 book Behavior-Based Robotics, reasons that since norms like the Geneva Conventions are based on human principles, incorporating them into the mental architecture of a machine would endow it with something akin to compassion. He adds, though, that it would be difficult to design "perceptual algorithms" capable of recognizing, for example, whether people are injured or waving a white flag.

Arkin considers sparking debate about technology to be the most important part of his job.
