Released

Other

Extracting Affordance Cues from Observed Human-Object Interactions

MPG Authors

Lies, J.-P.
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources on record
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Lies, J.-P. (2008). Extracting Affordance Cues from Observed Human-Object Interactions.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CAB7-8
Abstract
Autonomous, cognitive agents, e.g. mobile robots, have become increasingly important in recent years. In order to achieve a high level of autonomy, an agent has to be able to interact with objects even if it has not seen that kind of object before. Here, categorizing objects by their functionality appears to be the key. In this thesis, we briefly introduce the idea of functional object category detection in the context of human-object interaction. We present a system which extracts visual shape descriptions (affordance cues) of object parts that are characteristic for a certain task, based on the observation of a prototypical interaction. The system is implemented and integrated into a cognitive agent framework, which allows cooperation with, e.g., manipulation systems. We show that the system is able to extract affordance cues for different grasping techniques on different objects, but also highlight restrictions of the system with respect to the scenery in which the interaction is observed. In cooperation with other research groups, we were able to use the detected affordance cues to detect objects in cluttered scenes and as input for manipulation.
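
The abstract only names the pipeline (observe a prototypical interaction, isolate the object part involved, describe its shape as an affordance cue). Purely as an illustration, and not the method used in the thesis, the following Python sketch shows one hypothetical way such a cue could be derived: take the observation frame where the hand comes closest to the object, treat the nearby object points as the task-relevant part, and summarise their shape. All names (ObservationFrame, AffordanceCue, extract_affordance_cue) and the toy descriptor are assumptions made for this example.

```python
# Illustrative sketch only; the data structures and the descriptor are
# hypothetical and do not come from the thesis.
from dataclasses import dataclass
import numpy as np

@dataclass
class ObservationFrame:
    """One frame of an observed human-object interaction."""
    object_points: np.ndarray   # (N, 3) surface points of the observed object
    hand_position: np.ndarray   # (3,) estimated hand position in the same frame

@dataclass
class AffordanceCue:
    """Shape description of the object part that was used for a given task."""
    task: str                   # e.g. "power grasp" or "precision grasp"
    part_points: np.ndarray     # object points near the hand at contact
    descriptor: np.ndarray      # toy shape descriptor of that part

def extract_affordance_cue(frames, task, contact_radius=0.05):
    """Pick the frame where hand and object are closest (assumed contact),
    cut out the object part around the hand, and summarise its shape."""
    contact_frame = min(
        frames,
        key=lambda f: np.min(
            np.linalg.norm(f.object_points - f.hand_position, axis=1)
        ),
    )
    dists = np.linalg.norm(
        contact_frame.object_points - contact_frame.hand_position, axis=1
    )
    part = contact_frame.object_points[dists < contact_radius]
    # Toy descriptor: centroid plus the extents of the part along its principal axes.
    centered = part - part.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    descriptor = np.concatenate(
        [part.mean(axis=0), np.sqrt(np.clip(eigvals, 0.0, None))]
    )
    return AffordanceCue(task=task, part_points=part, descriptor=descriptor)
```

In this reading, cues extracted for different grasp types on different objects could then be matched against new scenes or handed to a manipulation system, which is the kind of cooperation the abstract describes.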