  Multimodal Human-Robot Interface with Gesture-Based Virtual Collaboration

Song, E., Niitsuma, M., Kubota, T., Hashimoto, H., & Son, H. (2013). Multimodal Human-Robot Interface with Gesture-Based Virtual Collaboration. In J.-H. Kim, E. Matson, H. Myung, & P. Xu (Eds.), Robot Intelligence Technology and Applications 2012: An Edition of the Presented Papers from the 1st International Conference on Robot Intelligence Technology and Applications (pp. 91-104). Berlin, Germany: Springer.

Basic

Genre: Conference Paper

Files


Locators

Description: -
OA-Status:

Creators

 Creators:
Song, EU, Author
Niitsuma, M, Author
Kubota, T, Author
Hashimoto, H, Author
Son, HI 1, 2, Author
Affiliations:
1Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797              
2Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794              

Content

Free keywords: -
 Abstract: This paper proposes an intuitive teleoperation scheme that uses human gestures in conjunction with a multimodal human-robot interface. To cope with the complexity of dynamic daily environments, the authors apply haptic point-cloud rendering and virtual collaboration to the system. All of these functions are realized with newly proposed portable hardware called "the mobile iSpace". First, the surrounding environment of the teleoperated robot is captured with a depth camera and reconstructed as a 3D point cloud. A virtual world is then generated from this point cloud, and a virtual model of the teleoperated robot is placed in it. Operators teleoperate the humanoid robot with their own whole-body gestures, which are captured in real time by a depth camera placed on the operator side. The operator simultaneously receives visual and vibrotactile feedback through a head-mounted display and a vibrotactile glove. All system components, the human operator, the teleoperated robot, and the feedback devices, are connected through an Internet-based virtual collaboration system for flexible accessibility. The paper demonstrates the effectiveness of the proposed scheme with experiments showing how operators can access the remotely placed robot at any time and from any place.
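 Note: The abstract outlines a capture-reconstruct-gesture-feedback loop. The following is only a minimal illustrative sketch of such a loop, written against that description; all class and function names (DepthCamera, VirtualWorld, teleoperation_step, etc.) are hypothetical placeholders and not the interfaces of the actual mobile iSpace system.

# Illustrative sketch of the teleoperation loop described in the abstract.
# All names are hypothetical placeholders, not the paper's implementation.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class DepthCamera:
    """Stand-in for a depth sensor on the robot or operator side."""
    def capture_point_cloud(self) -> List[Point]:
        return [(0.0, 0.0, 1.0), (0.1, 0.0, 1.2)]  # dummy environment points

    def capture_gesture(self) -> List[Point]:
        return [(0.0, 1.5, 0.5)]  # dummy whole-body joint positions

@dataclass
class VirtualWorld:
    """Virtual world reconstructed from the remote robot's point cloud."""
    cloud: List[Point] = field(default_factory=list)

    def update(self, cloud: List[Point]) -> None:
        self.cloud = cloud

    def haptic_contacts(self, joints: List[Point]) -> List[Point]:
        # Rough proximity test standing in for haptic point-cloud rendering.
        return [p for p in self.cloud
                for j in joints
                if sum((a - b) ** 2 for a, b in zip(p, j)) < 0.01]

def teleoperation_step(robot_cam: DepthCamera,
                       operator_cam: DepthCamera,
                       world: VirtualWorld) -> None:
    # 1. Reconstruct the remote environment as a 3D point cloud.
    world.update(robot_cam.capture_point_cloud())
    # 2. Capture the operator's whole-body gesture in real time.
    joints = operator_cam.capture_gesture()
    # 3. Map the gesture to robot commands (placeholder: pass joints through).
    robot_command = joints
    # 4. Derive visual/vibrotactile feedback from contacts in the virtual world.
    contacts = world.haptic_contacts(joints)
    print(f"command={robot_command}, vibrotactile contacts={len(contacts)}")

if __name__ == "__main__":
    teleoperation_step(DepthCamera(), DepthCamera(), VirtualWorld())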

Details

Language(s):
 Dates: 2013
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.1007/978-3-642-37374-9_10
 Degree: -

Event

Title: 1st International Conference on Robot Intelligence Technology and Applications (RiTA 2012)
Place of Event: Gwangju, South Korea
Start-/End Date: 2012-12-16 - 2012-12-18

Legal Case


Project information


Source 1

Title: Robot Intelligence Technology and Applications 2012: An Edition of the Presented Papers from the 1st International Conference on Robot Intelligence Technology and Applications
Source Genre: Proceedings
 Creator(s):
Kim, J-H, Editor
Matson, ET, Editor
Myung, H, Editor
Xu, P, Editor
Affiliations:
-
Publ. Info: Berlin, Germany : Springer
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 91 - 104
Identifier: ISBN: 978-3-642-37373-2

Source 2

Title: Advances in Intelligent Systems and Computing
Source Genre: Series
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 208
Sequence Number: -
Start / End Page: 91 - 104
Identifier: -