
Meeting Abstract

Avatars in Virtual Reality

MPS-Authors

Mohler,  B
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Mohler, B. (2013). Avatars in Virtual Reality. Dagstuhl Reports, 3(6), 61-62.


Cite as: https://hdl.handle.net/21.11116/0000-0001-4F36-B
Abstract
Avatars are an increasingly popular research topic in the field of virtual reality. The first 20-30 minutes of our discussion were spent establishing exactly what participants meant by "avatars," and it was clear that our definitions of, and needs for, virtual humans fell into several categories. Avatars are often defined as digital models of people that either look or behave like the users they represent (see [1]). However, other terms, such as virtual humans (virtual characters that try to represent a human at as high a fidelity as possible) and social agents (virtual characters that fulfill a certain purpose through artificial intelligence), are also often referred to as avatars. Avatars can be realized in multiple ways, e.g., video-based capture [3], pre-made avatars experienced as the user's own body through first-person perspective and visual-motor or visual-tactile stimulation (e.g., [2]), and physical projections of video-captured data [4]. These are just a few of the many manifestations of avatars in virtual reality. In order to achieve high-fidelity virtual agents that act in a human way, many problems need to be solved by multi-disciplinary research groups. Virtual social agents must be able to move like humans, hold casual conversation, appear intelligent, be interactive, be both reactive and proactive (specific to the user), be empathetic, perform certain functions, follow basic rules of proxemics, and give and receive sensory feedback (visual, tactile, auditory). Some of the most promising applications for avatars and social agents in virtual reality are telepresence, ergonomics/simulation, training, teaching and education, medicine and health, basic science (understanding human behavior, see [5]), and, of course, gaming and entertainment. During the discussion we split into three breakout groups, each of which tried to define grand-challenge examples for avatar research.
One group discussed the challenges involved in remotely caring for an elderly parent or remotely putting one's child to bed (as a second parent). These challenges include face-to-face communication, physical interaction (to comfort and support, or to help with household tasks), observing and monitoring signs of mental and physical health, making the remote parent's presence believable to a child (in appearance, voice, and size), and the ability to embody a remote avatar. Another group discussed scenarios for avatars in the medical and health professions and in the education of medical professionals, specifically settings where only limited discourse occurs. Important to these scenarios are the ability to build trust, to convey empathy, and to instill confidence in the sometimes uncertain or emotional information being shared. Finally, another group considered the challenge of a portable self-representation that could be brought into whatever virtual reality application one is using. The challenges here are system-level: establishing standards for virtual reality models and animation methods, and addressing ethical issues of data security. Specifically, this group considered how the data for individual avatars might be collected, e.g., with cameras only, motion-capture suits, or physiological measures such as heart rate, skin conductance, and brain waves. The question was raised: which measures help increase fidelity, and which go ethically too far?