  Towards a platform-independent cooperative human-robot interaction system: II. Perception, execution and imitation of goal directed actions

Lallée, S., Pattacini, U., Boucher, J. D., Lemaignan, S., Lenz, A., Melhuish, C., et al. (2011). Towards a platform-independent cooperative human-robot interaction system: II. Perception, execution and imitation of goal directed actions. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2895-2902).


Creators:
Lallée, S., Author
Pattacini, U., Author
Boucher, J. D., Author
Lemaignan, S., Author
Lenz, A., Author
Melhuish, C., Author
Natale, L., Author
Skachek, S., Author
Hamann, K.1, Author           
Steinwender, J.1, Author           
Sisbot, E. A., Author
Metta, G., Author
Alami, R., Author
Warnier, M., Author
Guitton, J., Author
Warneken, F.1, Author                 
Dominey, P. F., Author
Affiliations:
1Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society

Content

Free keywords: action learning, action perception, action primitive composition, action representation, adaptive robot, adaptive systems, agency attribution, Cognition, compositional action execution specification, Context, gesture recognition, goal-directed action imitation, human action recognition, human motion tracking, human-robot interaction, humans, human understanding, iCub robot, image motion analysis, Jido robot, Kinect motion capture system, learning by example, learning systems, physical state change, platform-independent cooperative human-robot interaction system, platform-independent perceptual system, Robot kinematics, robot learning, Robot sensing systems, robot vision, Sparks, spoken language understanding, teleological reasoning, visual perception
Abstract: If robots are to cooperate with humans in an increasingly human-like manner, then significant progress must be made in their abilities to observe and learn to perform novel goal directed actions in a flexible and adaptive manner. The current research addresses this challenge. In CHRIS.I [1], we developed a platform-independent perceptual system that learns from observation to recognize human actions in a way which abstracted from the specifics of the robotic platform, learning actions including "put X on Y" and "take X". In the current research, we extend this system from action perception to execution, consistent with current developmental research in human understanding of goal directed action and teleological reasoning. We demonstrate the platform independence with experiments on three different robots. In Experiments 1 and 2 we complete our previous study of perception of the actions "put" and "take", demonstrating how the system learns to execute these same actions, along with the new related actions "cover" and "uncover", based on the composition of the action primitives "grasp X" and "release X at Y". Significantly, these compositional action execution specifications learned on one iCub robot are then executed on another, based on the abstraction layer of motor primitives. Experiment 3 further validates the platform-independence of the system, as a new action that is learned on the iCub in Lyon is then executed on the Jido robot in Toulouse. In Experiment 4 we extended the definition of action perception to include the notion of agency, again inspired by developmental studies of agency attribution, exploiting the Kinect motion capture system for tracking human motion. Finally, in Experiment 5 we demonstrate how the combined representation of action in terms of perception and execution provides the basis for imitation.
This provides the basis for an open-ended cooperation capability where new actions can be learned and integrated into shared plans for cooperation. Part of the novelty of this research is the robots' use of spoken language understanding and visual perception to generate action representations in a platform-independent manner based on physical state changes. This provides a flexible capability for goal-directed action imitation.
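The compositional scheme described in the abstract, where compound actions such as "put" and "cover" are built from the primitives "grasp X" and "release X at Y", can be sketched roughly as follows. This is an illustrative toy sketch only; the function and action names are hypothetical and do not correspond to the actual CHRIS or iCub APIs.

```python
# Toy sketch of composing goal-directed actions from motor primitives,
# in the spirit of "grasp X" + "release X at Y" -> "put X on Y".
# All names are illustrative, not taken from the CHRIS system.

def grasp(x):
    # A primitive invocation, represented here as a plain string.
    return f"grasp({x})"

def release_at(x, y):
    return f"release({x} at {y})"

def compose(*steps):
    """A compound action is an ordered sequence of primitive invocations."""
    return list(steps)

def put(x, y):
    # "put X on Y" = grasp X, then release X at Y.
    return compose(grasp(x), release_at(x, y))

def cover(y, x):
    # "cover Y with X" reuses the same primitive skeleton as "put X on Y".
    return put(x, y)

print(put("box", "table"))  # ['grasp(box)', 'release(box at table)']
```

Because the compound action is specified only as a sequence of abstract primitives, any robot that implements the primitive layer can execute it, which is the sense in which such specifications transfer across platforms.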

Details

Language(s): eng - English
 Dates: 2011
 Publication Status: Issued
 Rev. Type: Peer
 Identifiers: DOI: 10.1109/IROS.2011.6094744

Event

Title: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems
Place of Event: San Francisco, CA
Start-/End Date: 2011-09-25 - 2011-09-30


Source 1

Title: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems
Source Genre: Proceedings
Start / End Page: 2895 - 2902
Identifier: ISBN: 978-1-61284-456-5