  Visually Plausible Human-Object Interaction Capture from Wearable Sensors

Guzov, V., Sattler, T., & Pons-Moll, G. (2022). Visually Plausible Human-Object Interaction Capture from Wearable Sensors. Retrieved from https://arxiv.org/abs/2205.02830.


Basic data

Genre: Research paper

Files

arXiv:2205.02830.pdf (Preprint), 22MB
Name:
arXiv:2205.02830.pdf
Description:
File downloaded from arXiv at 2023-01-10 12:32
OA status:
Green
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

External references


Creators

Creators:
Guzov, Vladimir 1, Author
Sattler, Torsten 2, Author
Pons-Moll, Gerard 1, Author
Affiliations:
1 Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
2 External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: In everyday life, humans naturally modify the surrounding environment through interactions, e.g., moving a chair to sit on it. To reproduce such interactions in virtual spaces (e.g., the metaverse), we need to be able to capture and model them, including changes in the scene geometry, ideally from ego-centric input alone (head camera and body-worn inertial sensors). This is an extremely hard problem, especially since the object/scene might not be visible from the head camera (e.g., a human not looking at a chair while sitting down, or not looking at the door handle while opening a door). In this paper, we present HOPS, the first method to capture interactions such as dragging objects and opening doors from ego-centric data alone. Central to our method is reasoning about human-object interactions, allowing us to track objects even when they are not visible from the head camera. HOPS localizes and registers both the human and the dynamic object in a pre-scanned static scene. HOPS is an important first step towards advanced AR/VR applications based on immersive virtual universes, and can provide human-centric training data to teach machines to interact with their surroundings. The supplementary video, data, and code will be available on our project page at http://virtualhumans.mpi-inf.mpg.de/hops/

Details

Language(s): eng - English
Date: 2022-05-05
Publication status: Published online
Pages: 24 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 2205.02830
BibTeX cite key: Guzov2205.02830
URI: https://arxiv.org/abs/2205.02830
Type of degree: -

Event


Decision


Project information


Source
