Released

Poster

Stabilization of oneself in Virtual Reality: Interaction of visual and vestibular cues

MPG Authors

Kreher,  BW
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


von der Heyde,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources on file
Full texts (restricted access)
There are currently no full texts released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There is no freely accessible supplementary material available
Citation

Kreher, B., von der Heyde, M., & Bülthoff, H. (2001). Stabilization of oneself in Virtual Reality: Interaction of visual and vestibular cues. Poster presented at 4. Tübinger Wahrnehmungskonferenz (TWK 2001), Tübingen, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-E2EC-7
Abstract
Although different sensory organs have quite different characteristics, humans have no problem evaluating them in combination. In particular, when asked to stabilize their position in space, humans need to integrate the different sensory inputs. In the stabilization task studied here, humans mainly use the vestibular, visual, and proprioceptive senses. To study this sensor fusion in a body stabilization task, we used a motion platform with six degrees of freedom for the vestibular stimulus, a head-mounted display (HMD) for the visual stimulus, and a joystick as input device. The motion platform and the HMD simulated the physical model of an inverse pendulum. Using the joystick, the subject could exert a force (acceleration) on the pendulum and thereby control the state of the model. In our experiments, the subjects had to balance themselves on the pendulum against changes in roll, yaw, or both axes simultaneously. They received either vestibular information, visual information, or both. The visual stimulus was a random-dot cloud with limited-lifetime dots and an artificial horizon, in order to match the character of the vestibular stimulus (absolute positional information for roll, but only information about changes in position for yaw). Subjects performed a pre-test, six training sessions, and a post-test. In the pre- and post-test sections, the subjects had to perform a stabilization task for all nine possible conditions (each lasting 200 seconds). For the training section, the four subjects were divided into two groups receiving either visual or vestibular input (VISGroup and VESTGroup, respectively). During the training section, the performance of all subjects showed a large overall improvement. In the pre- and post-test of the yaw stabilization task, the subjects' performance (mean absolute positional error) was much better with the visual than with the vestibular stimulus (pre-test: vestibular 6.00°, visual 2.99°, t(3)=7.32, p<0.005; post-test: vestibular 5.18°, visual 1.64°, t(3)=6.83, p<0.006). For the roll task, all subjects showed a much larger increase in performance with the vestibular than with the visual stimulus (vestibular 2.73°, visual 0.98° decrease of the average absolute position from pre- to post-test, t(3)=6.55, p<0.007). Finally, the VESTGroup showed a significant improvement in the visual roll task (pre-test 3.02°, post-test 1.73° standard deviation, t(1)=14.8, p<0.043). The VISGroup also showed a large but non-significant improvement in the vestibular roll task (pre-test 4.26°, post-test 2.19° standard deviation). This suggests that subjects are able to transfer a learned skill from one input modality to another.
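
The inverse-pendulum model and the mean-absolute-error measure can be made concrete with a minimal sketch. The Python snippet below is not the authors' implementation: the gravity constant, pendulum length, integration time step, and the proportional-derivative control gains standing in for a subject's joystick input are all assumptions made purely for illustration.

```python
# Minimal sketch (assumed parameters, not the authors' code): an inverse
# pendulum in one axis, driven by a joystick acceleration command, scored
# by the mean absolute positional error over a 200-second trial.
import math

GRAVITY = 9.81          # m/s^2 (assumed)
PENDULUM_LENGTH = 1.0   # m (assumed; the real model parameters are not reported)
DT = 0.01               # s, simulation time step (assumed)

def step(angle, velocity, joystick_accel):
    """Advance the pendulum by one time step using Euler integration.

    angle          -- deviation from upright, in radians
    velocity       -- angular velocity, in rad/s
    joystick_accel -- angular acceleration commanded via the joystick
    """
    # The upright equilibrium is unstable: gravity amplifies any deviation,
    # so the joystick command has to counteract it continuously.
    accel = (GRAVITY / PENDULUM_LENGTH) * math.sin(angle) + joystick_accel
    velocity += accel * DT
    angle += velocity * DT
    return angle, velocity

def mean_absolute_error(angles):
    """Mean absolute positional error in degrees, the trial performance score."""
    return sum(abs(math.degrees(a)) for a in angles) / len(angles)

# Example run: a simple proportional-derivative "subject" balancing for 200 s,
# starting 2 degrees off upright. The gains are hypothetical.
angle, velocity = math.radians(2.0), 0.0
trace = []
for _ in range(int(200.0 / DT)):
    command = -20.0 * angle - 5.0 * velocity
    angle, velocity = step(angle, velocity, command)
    trace.append(angle)
print(f"mean absolute error: {mean_absolute_error(trace):.2f} deg")
```

The same per-trial error score, averaged across subjects and compared between conditions, is what the paired t-tests in the abstract operate on.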