
  Mobile multimodal human-robot interface for virtual collaboration

Song, Y., Niitsuma, M., Kubota, T., Hashimoto, H., & Son, H. (2012). Mobile multimodal human-robot interface for virtual collaboration. In 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom) (pp. 627-631). Piscataway, NJ, USA: IEEE.

Basic
Item Permalink: http://hdl.handle.net/21.11116/0000-0001-8AD0-8
Version Permalink: http://hdl.handle.net/21.11116/0000-0001-8AD1-7
Genre: Conference Paper

Files


Locators

Description:
-

Creators

 Creators:
Song, YE, Author
Niitsuma, M, Author
Kubota, T, Author
Hashimoto, H, Author
Son, HI 1, 2, Author
Affiliations:
1 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
 Abstract: This paper proposes an intuitive teleoperation scheme that uses human gesture in conjunction with a multimodal human-robot interface. To cope with the complexity of dynamic daily environments, the authors apply haptic point-cloud rendering and virtual collaboration to the system. All of these functions are achieved with newly proposed portable hardware, which the authors call "the mobile iSpace". First, the surrounding environment of a teleoperated robot is captured and reconstructed as a 3D point cloud using a depth camera. A virtual world is then generated from the 3D point cloud, in which a virtual model of the teleoperated robot is placed. Operators use their own whole-body gestures to teleoperate the humanoid robot; the gestures are captured in real time by a depth camera placed on the operator's side. The operator receives both visual and vibrotactile feedback simultaneously through a head-mounted display and a vibrotactile glove. All of these system components, the human operator, the teleoperated robot, and the feedback devices, are connected via an Internet-based virtual collaboration system for flexible accessibility. The paper demonstrates the effectiveness of the proposed scheme with experiments showing that operators can access the remotely placed robot at any time and from any place.

Details

Language(s): -
 Dates: 2012-12
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: DOI: 10.1109/CogInfoCom.2012.6422055
 Degree: -

Event

Title: IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom 2012)
Place of Event: Kosice, Slovakia
Start-/End Date: 2012-12-02 - 2012-12-05

Legal Case


Project information


Source 1

Title: 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom)
Source Genre: Proceedings
Publ. Info: Piscataway, NJ, USA : IEEE
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 627 - 631
Identifier: ISBN: 978-1-4673-5187-4