
Conference Paper

Multimodal multi-user human-robot interface for virtual collaboration

MPS-Authors

Son, HI
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Song, Y., Niitsuma, M., Kubota, T., Hashimoto, H., & Son, H. (2012). Multimodal multi-user human-robot interface for virtual collaboration. In 2012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI) (pp. 18-22). Piscataway, NJ, USA: IEEE.


Cite as: https://hdl.handle.net/21.11116/0000-0001-8AD9-F
Abstract
We present an intuitive teleoperation scheme based on human gestures and a multimodal multi-user human-robot interface. To cope with dynamic everyday environments, we further apply haptic point-cloud rendering and virtual collaboration; all of these functions run on portable hardware called the “mobile iSpace”. First, the environment surrounding the teleoperated robot is captured by a depth camera and reconstructed as a 3D point cloud. A virtual world is then generated from the point cloud, and a virtual model of the teleoperated robot is placed in it. The user teleoperates the humanoid robot with whole-body gestures, which are captured in real time by a depth camera on the user side. At the same time, the user receives visual and vibrotactile feedback through a head-mounted display and a vibrotactile glove. All system components, i.e., the human user, the teleoperated robot, and the feedback devices, are connected through an Internet-based virtual collaboration system to provide flexible access, so that users can reach the remotely placed robot whenever and wherever they want.
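
The record ships no code, but the reconstruction step the abstract describes (capturing a depth image and back-projecting it into a 3D point cloud) follows the standard pinhole camera model. The sketch below is a minimal illustration of that step, not the authors' implementation; the intrinsics (fx, fy, cx, cy) and the synthetic depth frame are assumptions chosen to resemble a Kinect-class depth camera.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1000.0):
        """Back-project a depth image (H x W, raw units) into an N x 3 point cloud.

        Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
        fx, fy, cx, cy are the depth camera intrinsics (assumed known here).
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.astype(np.float64) / depth_scale      # raw units -> metres
        valid = z > 0                                   # drop pixels with no depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x[valid], y[valid], z[valid]], axis=-1)

    # Hypothetical intrinsics and a synthetic depth frame for illustration only.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fake_depth = rng.integers(500, 4000, size=(480, 640))  # millimetres
        cloud = depth_to_point_cloud(fake_depth, fx=525.0, fy=525.0,
                                     cx=319.5, cy=239.5)
        print(cloud.shape)  # (N, 3) points in camera coordinates, metres

In the pipeline the abstract outlines, a cloud like this would then seed the virtual world used for haptic rendering and for placing the virtual robot model.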