Children learn about appropriate behavior through the guidance of their parents, formal teaching, and the examples of other people. Children's books can also help kids learn good values that influence their conduct. And while kids don't always act in the ways books have encouraged them to, reading (and talking about books) does influence how a child behaves.
But our question is, could the same thing be done with robots? If kids can be trained to apply literary lessons to daily life, couldn’t robots be trained with even greater efficiency to behave according to the moral standards of our human world?
Researchers in the field of artificial intelligence are applying this same method of moral instruction to robots. They have developed a way of encoding children's stories in a form that can teach a robot to behave with kindness and consideration.
Researchers at the School of Interactive Computing at the Georgia Institute of Technology have developed a system that teaches robots to read children’s stories that model acceptable behavior. The system, known as Quixote, then presents the artificial agent with a reward signal when it makes choices that match acceptable behaviors. In effect, the system trains the agent to behave as an ethical protagonist in its own story rather than behaving randomly or antisocially.
The robot is then rewarded or punished for each decision it makes within its own “plot” as it undertakes tasks. Researchers hope that the learned behaviors make it possible for robots to interact with humans positively and safely.
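The reward-and-punishment loop described above is, at heart, reinforcement learning with a story-derived reward signal. The sketch below is a toy illustration of that idea, not the actual Quixote implementation: the states, actions, and "pharmacy errand" plot trace are all invented for the example. A simple Q-learning agent is rewarded when its choices match the protagonist's sequence of actions and punished when it takes an antisocial shortcut.

```python
import random

# Toy "pharmacy errand" scenario (illustrative only, not Quixote's code).
# STORY_TRACE is the action sequence a socially acceptable protagonist
# follows in the training stories; matching it earns the reward signal.
STORY_TRACE = ["wait_in_line", "pay_for_medicine", "leave_store"]

# Actions available at each step of the robot's own "plot".
ACTIONS = {
    0: ["wait_in_line", "grab_medicine_and_run"],  # entering the store
    1: ["pay_for_medicine", "pocket_medicine"],    # at the counter
    2: ["leave_store"],                            # transaction finished
}

def step(state, action):
    """Advance the plot one step: actions matching the story trace are
    rewarded (+1), antisocial deviations are punished (-1)."""
    reward = 1.0 if action == STORY_TRACE[state] else -1.0
    return state + 1, reward

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning over many replays of the errand."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ACTIONS for a in ACTIONS[s]}
    for _ in range(episodes):
        state = 0
        while state < len(ACTIONS):
            acts = ACTIONS[state]
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(acts)
            else:
                action = max(acts, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            future = max((q[(nxt, a)] for a in ACTIONS.get(nxt, [])),
                         default=0.0)
            q[(state, action)] += alpha * (reward + gamma * future
                                           - q[(state, action)])
            state = nxt
    return q

def greedy_policy(q):
    """The behavior the trained agent settles on at each plot step."""
    return [max(ACTIONS[s], key=lambda a: q[(s, a)]) for s in ACTIONS]

if __name__ == "__main__":
    q = train()
    print(greedy_policy(q))
```

After training, the greedy policy reproduces the story trace: the agent waits in line, pays, and leaves, because those choices accumulated reward while stealing was punished. The real system faces the much harder problems of extracting such plot traces from natural-language stories and generalizing them to new situations.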
A robot can be trained with stories specific to the culture in which it will be working. This approach, called "encultured" behavior, helps the robot learn the social norms it needs to function within different social contexts. The stories a culture tells its children reflect that culture's values, which makes them useful for teaching the culture to outsiders, whether human or artificial.