Machines That Lie
Robots Taught How to Deceive
"Researchers at the Georgia Institute of Technology may have made a terrible, terrible mistake: They’ve taught robots how to deceive.
It probably seemed like a good idea at the time. Military robots capable of deception could trick battlefield foes who aren’t expecting their adversaries to be as smart as a real soldier might be, for instance. But when machines rise up against humans and the robot apocalypse arrives, we’re all going to be wishing that Ronald Arkin and Alan Wagner had kept their ideas to themselves.
The pair detailed how they managed it in a paper published in the International Journal of Social Robotics. Two robots — one black and one red — were taught to play hide and seek. The black robot, the hider, chose from three different hiding places, and the red robot, the seeker, had to find it using clues left by knocked-over colored markers positioned along the paths to the hiding places.
However, unbeknownst to the poor red seeker, the black robot had a trick up its sleeve. Once it had passed the colored markers, it shifted direction and hid in an entirely different location, leaving behind a false trail that fooled the red robot in 75 percent of the 20 trials that the researchers ran. The five failed trials resulted from the black robot’s difficulty in knocking over the correct markers.
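For readers who want a feel for how such a false trail plays out, here is a minimal toy simulation of the idea. To be clear, this is not the Arkin/Wagner algorithm from the paper; the hiding spots, the seeker’s marker-following rule, and the knock-reliability parameter are all invented for illustration, chosen only so the toy roughly mirrors the reported 75 percent success rate.

```python
import random

# Illustrative sketch only: a hider picks a real spot, then tries to topple
# markers pointing toward a different, false spot. A naive seeker simply
# follows whichever trail of markers got knocked over. The constants below
# are assumptions, not values from the paper.

HIDING_SPOTS = ["left", "center", "right"]
KNOCK_RELIABILITY = 0.75  # assumed chance the hider topples the intended markers

def hider_turn(rng):
    """Pick a true hiding spot, then signal a different (false) one."""
    true_spot = rng.choice(HIDING_SPOTS)
    false_spot = rng.choice([s for s in HIDING_SPOTS if s != true_spot])
    # If the knock fails, we simplify by saying the trail ends up revealing
    # the true spot (in the real experiment, failures came from not knocking
    # over the correct markers).
    markers_knocked = false_spot if rng.random() < KNOCK_RELIABILITY else true_spot
    return true_spot, markers_knocked

def seeker_turn(markers_knocked):
    """The seeker naively trusts the toppled markers."""
    return markers_knocked

def run_trials(n=20, seed=0):
    rng = random.Random(seed)
    fooled = 0
    for _ in range(n):
        true_spot, markers = hider_turn(rng)
        if seeker_turn(markers) != true_spot:
            fooled += 1
    return fooled

if __name__ == "__main__":
    print(f"Hider fooled the seeker in {run_trials()} of 20 trials")
```

Run over 20 trials, the hider fools the seeker roughly three times out of four, which is the same ballpark as the result described above, though the real robots had to contend with noisy sensing and actuation rather than a coin flip.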
“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” Wagner says. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”
"Researchers at the Georgia Institute of Technology may have made a terrible, terrible mistake: They’ve taught robots how to deceive.
It probably seemed like a good idea at the time. Military robots capable of deception could trick battlefield foes who aren’t expecting their adversaries to be as smart as a real soldier might be, for instance. But when machines rise up against humans and the robot apocalypse arrives, we’re all going to be wishing that Ronald Arkin and Alan Wagner had kept their ideas to themselves.
The pair detailed how they managed it in a paper published in the International Journal of Social Robotics. Two robots — one black and one red — were taught to play hide and seek. The black, hider, robot chose from three different hiding places, and the red, seeker, robot had to find him using clues left by knocked-over colored markers positioned along the paths to the hiding places.
However, unbeknownst to the poor red seeker, the black robot had a trick up its sleeve. Once it had passed the colored markers, it shifted direction and hid in an entirely different location, leaving behind it a false trail that managed to fool the red robot in 75 percent of the 20 trials that the researchers ran. The five failed trails resulted from the black robots’ difficulty in knocking over the correct markers.
“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” Wagner says. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”