
Can fables, fairy tales teach robots morality?

The goal of systems like Quixote is to mesh programmable goals and actions with human values.

By Brooks Hays
Researchers are working to give robots a moral compass by teaching them fables and fairy tales like "Pinocchio." File photo by UPI Photo/Joan Marcus/DreamWorks Theatricals

ATLANTA, Feb. 17 (UPI) -- Researchers at Georgia Tech are attempting to give robots a sense of right and wrong by teaching them fairy tales.

The teaching is done via programming. Mark Riedl and Brent Harrison, researchers at Georgia Tech's School of Interactive Computing, have developed software that allows robots to read fables and glean proper sequences of events.


The software empowers robots to recall relevant and socially appropriate sequences when responding to real human behaviors and interactions.

"The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature," Riedl, associate professor and director of the Entertainment Intelligence Lab, said in a press release.

"We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won't harm humans and still achieve the intended purpose."

The software, called Quixote, builds on a previous system designed by Riedl. That system, Scheherazade, featured an algorithm that enabled artificial intelligence to recognize socially acceptable action sequences by mining story plots crowdsourced from the Internet.

The system is able to analyze and code event sequences as acceptable or not acceptable and link them with programmed reward signals to encourage ethical behavior and punish antisocial actions.
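In broad terms, that mechanism resembles reward shaping in reinforcement learning. The short Python sketch below is purely illustrative and assumes a toy setup; the event names, labels and reward values are invented for this example and do not come from the Georgia Tech software.

# Illustrative sketch only, not the Georgia Tech code: turning labeled
# event sequences into reward signals for a learning agent.

# Event sequences gleaned from stories, labeled as acceptable or not.
# These entries are invented for the example.
LABELED_SEQUENCES = {
    ("enter_bank", "wait_in_line", "withdraw_cash"): "acceptable",
    ("enter_bank", "threaten_teller", "take_cash"): "unacceptable",
}

def reward_signal(sequence):
    """Map an observed event sequence to a scalar reward:
    positive if it matches an acceptable story sequence,
    negative if it matches an antisocial one, neutral otherwise."""
    label = LABELED_SEQUENCES.get(tuple(sequence))
    if label == "acceptable":
        return 1.0
    if label == "unacceptable":
        return -1.0
    return 0.0

print(reward_signal(["enter_bank", "wait_in_line", "withdraw_cash"]))  # 1.0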



In their latest paper on the topic, Riedl and Harrison show that given a scenario and programmable goal, robots can use Quixote to consider several courses of action and then select those that most align with socially acceptable sequences.

For example, a robot programmed to retrieve money from a bank might realize robbing a bank is the fastest way to gain access to large amounts of cash. But Quixote would empower the robot to opt for the more socially acceptable behavior -- waiting in line at the ATM.
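A second toy sketch, again an assumption-laden illustration rather than Quixote's actual code, shows how a planner might pick among candidate action sequences by scoring each one against story-derived acceptability.

# Illustrative only: choosing among candidate plans by a
# story-derived acceptability score. Plans and scores are
# invented for the example; this is not Quixote's API.

ACCEPTABILITY = {
    ("enter_bank", "threaten_teller", "take_cash"): -1.0,   # antisocial
    ("enter_bank", "wait_in_line", "withdraw_cash"): 1.0,   # acceptable
}

candidate_plans = [
    ["enter_bank", "threaten_teller", "take_cash"],   # fastest route to cash
    ["enter_bank", "wait_in_line", "withdraw_cash"],  # slower but acceptable
]

def acceptability(plan):
    # Look up how well the plan matches socially acceptable story sequences.
    return ACCEPTABILITY.get(tuple(plan), 0.0)

# Both plans reach the goal; prefer the most socially acceptable one.
best_plan = max(candidate_plans, key=acceptability)
print(best_plan)  # ['enter_bank', 'wait_in_line', 'withdraw_cash']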

Quixote isn't a full-fledged moral compass in computer form, but a primitive first step toward promoting ethical behavior in robots.

"We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior," Riedl said. "Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual."
