
Item Details


Released

Conference Paper

Animated self-avatars in immersive virtual reality for studying body perception and distortions

MPS-Authors
/persons/resource/persons215077

Paul, S
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Space and Body Perception, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84088

Mohler, BJ
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Space and Body Perception, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are currently no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Paul, S., & Mohler, B. (2015). Animated self-avatars in immersive virtual reality for studying body perception and distortions. In IEEE VR Doctoral Consortium 2015 (pp. 1-3).


Cite as: https://hdl.handle.net/21.11116/0000-0000-AED6-B
Abstract
So far in my research with virtual reality I have focused on using body and hand motion tracking systems to animate different 3D self-avatars in immersive virtual environments (head-mounted displays or desktop virtual reality). We use self-avatars to explore a basic research question: what sensory information is used to perceive one's body dimensions? We also address the applied question of how best to create a calibrated self-avatar for efficient use in first-person immersive head-mounted display interaction scenarios. A self-avatar used for such research questions and applications has to be precise, easy to use, and able to support interaction of the virtual hand and body with physical objects. This is what my research has focused on thus far and what I am developing for the completion of the first year of my graduate studies. We plan to use the Leap Motion for hand and arm movements, the Moven inertial measurement suit for full-body tracking, and the Oculus DK2 head-mounted display. This paper describes a multi-step process for setting up and calibrating an animated self-avatar with full-body motion and hand tracking. First, the user's dimensions are measured and the user is given a self-avatar with these dimensions. The user is then asked to perform pre-determined actions (e.g. touching objects, walking along a specific trajectory). Next, we estimate in real time how precise the animated body and body parts are relative to real-world reference objects. Finally, the avatar size is scaled, or the motion retargeted, to meet a specific minimum error requirement.
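The calibration loop sketched in the abstract (measure error against reference objects, then rescale until an error bound is met) could look roughly like the following. This is a minimal illustrative sketch, not the authors' implementation: the function names, the uniform-scaling heuristic, and the 2 cm error threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of the avatar-calibration loop: rescale the avatar
# until tracked body-part positions match real-world reference objects
# within a minimum error requirement.
import math

ERROR_THRESHOLD_M = 0.02  # assumed 2 cm requirement (not from the paper)


def positional_error(avatar_points, reference_points):
    """Mean Euclidean distance between animated body parts and the
    real-world reference objects they should coincide with."""
    dists = [math.dist(a, r) for a, r in zip(avatar_points, reference_points)]
    return sum(dists) / len(dists)


def calibrate_avatar_scale(avatar_points, reference_points, scale=1.0):
    """Uniformly rescale the avatar until the error bound is met.
    A real system would instead retarget motion per limb segment."""
    for _ in range(50):  # cap iterations to guarantee termination
        scaled = [tuple(scale * c for c in p) for p in avatar_points]
        err = positional_error(scaled, reference_points)
        if err < ERROR_THRESHOLD_M:
            break
        # Nudge the scale toward the mean reference/avatar reach ratio.
        ratio = sum(
            math.dist((0, 0, 0), r) / math.dist((0, 0, 0), s)
            for s, r in zip(scaled, reference_points)
        ) / len(scaled)
        scale *= ratio
    return scale, err


# Example: a fingertip tracked at (0.95, 0, 1.33) m should touch a
# reference object at (1.0, 0, 1.4) m; the scale converges to ~1.053.
scale, err = calibrate_avatar_scale([(0.95, 0.0, 1.33)], [(1.0, 0.0, 1.4)])
```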