Friday, December 4, 2015

Robots Imitate Babies

Robots Learn Imitation Tasks from Babies

Greg Watry | December 2, 2015

If household robots are to enter the layman's arena, a hurdle must be conquered. Rather than relying on explicit programming to accomplish tasks, an unrealistic expectation given that not all humans are adept at computer programming, a robot must learn from demonstration.
University of Washington psychologists and computer scientists are working to make this a reality by creating robots capable of learning like a human infant. The research was published in PLOS ONE.

[Image: photos of gaze experiments]
Source: http://www.washington.edu/news/2015/12/01/uw-roboticists-learn-to-teach-robots-from-babies/

<more at http://www.rdmag.com/articles/2015/12/robots-learn-imitation-tasks-babies; related links:
http://www.sciencedaily.com/releases/2015/12/151201131715.htm (Roboticists learn to teach robots from babies. December 1, 2015) and
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0141965 (A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning. Michael Jae-Yoon Chung, Abram L. Friesen, Dieter Fox, Andrew N. Meltzoff, and Rajesh P. N. Rao. PLOS ONE. DOI: 10.1371/journal.pone.0141965. November 4, 2015.
[Abstract: A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.])>
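The abstract's three steps (learn a probabilistic model of your own actions, infer the goal behind an observed human action, then imitate the goal rather than the motion) lend themselves to a toy illustration. The Python sketch below is not the paper's implementation: the goal names, action names, outcome probabilities, and confidence threshold are all invented stand-ins, and the paper's models operate over real robot state and action spaces rather than a three-entry table.

"""A minimal sketch of Bayesian goal-based imitation, loosely following the
paper's three steps. All goals, actions, and probabilities are hypothetical."""
import random
from collections import defaultdict

GOALS = ["stack_blocks", "sort_by_color", "clear_table"]   # hypothetical goals
ACTIONS = ["pick_and_place", "push", "sweep_arm"]          # hypothetical actions

# Hidden ground truth used only to simulate self-experience: the chance
# that executing an action leaves the table in each goal state.
_WORLD = {
    "pick_and_place": {"stack_blocks": 0.70, "sort_by_color": 0.20, "clear_table": 0.10},
    "push":           {"stack_blocks": 0.10, "sort_by_color": 0.60, "clear_table": 0.30},
    "sweep_arm":      {"stack_blocks": 0.05, "sort_by_color": 0.15, "clear_table": 0.80},
}

def _execute(action):
    """Simulate one self-experience trial; return the resulting goal state."""
    return random.choices(GOALS, weights=[_WORLD[action][g] for g in GOALS])[0]

def learn_self_model(trials=2000):
    """(i) Estimate p(goal | action) by 'motor babbling': try each action
    repeatedly and record the frequency of each resulting state."""
    model = {a: defaultdict(float) for a in ACTIONS}
    for a in ACTIONS:
        for _ in range(trials):
            model[a][_execute(a)] += 1.0 / trials
    return model

def infer_goal(model, observed_action):
    """(ii) Bayesian goal inference: assume the human chose the observed
    action because it tends to achieve their goal, so each goal's likelihood
    is that action's learned success probability for it. With a uniform
    prior, the posterior is the normalized likelihood."""
    likelihood = {g: model[observed_action][g] for g in GOALS}
    z = sum(likelihood.values()) or 1.0
    return {g: p / z for g, p in likelihood.items()}

def imitate(model, observed_action, confidence_threshold=0.5):
    """(iii) Goal-based imitation: pursue the most probable inferred goal
    with the robot's *own* best action, which need not match the human's.
    If even the best action is too unlikely to succeed, ask for help."""
    posterior = infer_goal(model, observed_action)
    goal = max(posterior, key=posterior.get)
    best_action = max(ACTIONS, key=lambda a: model[a][goal])
    if model[best_action][goal] < confidence_threshold:
        return "ask human for help with goal '%s'" % goal
    return "pursue '%s' via '%s'" % (goal, best_action)

if __name__ == "__main__":
    random.seed(0)
    self_model = learn_self_model()
    print(imitate(self_model, observed_action="push"))

With these made-up numbers, the robot infers the 'sort_by_color' goal from a pushed demonstration and, since its own best action clears the 0.5 threshold, imitates without asking for help. Because step (iii) maximizes over the robot's own action repertoire, the chosen action can differ from the demonstrated one, which is the paper's point about imitating goals even when the robot's actuators differ from a human's; the threshold check is a crude stand-in for the paper's uncertainty-driven help seeking.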
