
Released

Poster

How real is virtual reality really? Comparing spatial updating using pointing tasks in real and virtual environments

MPG Authors

Riecke, BE
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


von der Heyde, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (freely accessible)

pdf629.pdf
(any full text), 2 MB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Riecke, B., von der Heyde, M., & Bülthoff, H. (2001). How real is virtual reality really? Comparing spatial updating using pointing tasks in real and virtual environments. Poster presented at the First Annual Meeting of the Vision Sciences Society (VSS 2001), Sarasota, FL, USA.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-E183-8
Abstract
When moving through space, we continuously update our egocentric mental spatial representation of our surroundings. We call this seemingly effortless, automatic, and obligatory (i.e., hard-to-suppress) process “spatial updating”. Our goal here is twofold: 1) to quantify spatial updating; 2) to investigate the importance and interaction of visual and vestibular cues for spatial updating. In a learning phase (20 min), subjects learned the positions of twelve targets attached to the walls, 2.5 m away. Subjects saw either the real environment or a photo-realistic copy presented via a head-mounted display (HMD). A motion platform was used for vestibular stimulation. In the test phase, subjects were rotated to different orientations and asked to point “as quickly and accurately as possible” to four targets announced consecutively via headphones. In general, subjects had no problem mentally updating their orientation in space: they performed as well after rotations to a new orientation as after rotations in which they were immediately returned to the original orientation. Performance, quantified as response time, absolute pointing error, and pointing variability, was best in the real-world condition. However, when the field of view was limited with cardboard blinders to match that of the HMD (40×30 deg), performance decreased and was comparable to the HMD condition. Presenting turning information only visually (through the HMD) hardly altered those results. In both the real-world and HMD conditions, spatial updating was obligatory in the sense that it was significantly more difficult to IGNORE ego-turns (i.e., “point as if not having turned”) than to UPDATE them as usual. Speeded pointing tasks proved to be a viable method for quantifying “spatial updating”. We conclude that, at least for the limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real-world counterpart, even when only visual information was presented.
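The abstract quantifies pointing performance as response time, absolute pointing error, and pointing variability. As an illustration only (this is not the authors' analysis code, and the trial format and function names below are assumptions), a minimal sketch of how such per-condition summary metrics could be computed, using a circular standard deviation for the variability measure since pointing responses are angles:

```python
import math

def angular_error(response_deg, target_deg):
    """Signed angular difference in degrees, wrapped to (-180, 180]."""
    d = (response_deg - target_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def pointing_metrics(trials):
    """Summarize a list of (response_deg, target_deg, rt_seconds) trials.

    Returns (mean response time, mean absolute pointing error,
    pointing variability as the circular SD of the signed errors).
    Trial tuple layout is a hypothetical format for this sketch.
    """
    errors = [angular_error(r, t) for r, t, _ in trials]
    mean_rt = sum(rt for _, _, rt in trials) / len(trials)
    mean_abs_err = sum(abs(e) for e in errors) / len(errors)
    # Circular SD from the mean resultant length R of the error directions.
    c = sum(math.cos(math.radians(e)) for e in errors) / len(errors)
    s = sum(math.sin(math.radians(e)) for e in errors) / len(errors)
    R = math.hypot(c, s)
    circ_sd = math.degrees(math.sqrt(-2.0 * math.log(R)))
    return mean_rt, mean_abs_err, circ_sd
```

For example, three trials pointing at a target at 0° with responses of 10°, −10°, and 350° (i.e., −10°) give a mean absolute error of 10°; comparing these summaries across the real-world, blinder, and HMD conditions mirrors the performance comparison described above.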