
Released

Journal Article

Active In-Hand Object Recognition on a Humanoid Robot

MPS-Authors

Browatzki, B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Browatzki, B., Tikhanoff, V., Metta, G., Bülthoff, H., & Wallraven, C. (2014). Active In-Hand Object Recognition on a Humanoid Robot. IEEE Transactions on Robotics, 30(5), 1260-1269. doi:10.1109/TRO.2014.2328779.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0027-7FA9-A
Abstract
For any robot, the ability to recognize and manipulate unknown objects is crucial for working successfully in natural environments. Object recognition and categorization are very challenging problems, as 3-D objects often give rise to ambiguous 2-D views. Here, we present a perception-driven exploration and recognition scheme for in-hand object recognition, implemented on the iCub humanoid robot. In this setup, the robot actively seeks out object views to optimize the exploration sequence. This is achieved by treating object recognition as a localization problem: we search for the most likely viewpoint position across the viewspheres of all candidate objects. This problem can be solved efficiently using a particle filter that fuses visual cues with the associated motor actions. Based on the state of the filter, we predict the next best viewpoint after each recognition step by searching for the action that leads to the highest expected information gain. We conduct extensive evaluations of the proposed system in simulation as well as on the actual robot and show the benefit of perception-driven exploration over passive, vision-only processing for discriminating between highly similar objects: objects are recognized faster and, at the same time, with higher accuracy.
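The recognition-as-localization idea in the abstract can be sketched with a toy particle filter on a 1-D viewsphere. Everything below is an illustrative assumption, not the paper's iCub implementation: the two object models, the Gaussian noise level, and the candidate action set are all made up to show the mechanism — fuse visual cues with known motor actions, then choose the rotation that minimizes the expected posterior entropy (i.e., maximizes expected information gain).

```python
import math
import random

# Hypothetical "appearance" models: the observed visual cue as a function of
# the viewpoint angle on a 1-D viewsphere. Both names and models are made up.
OBJECTS = {
    "mug": lambda t: math.cos(t),
    "can": lambda t: math.cos(2 * t),  # ambiguous with "mug" from single views
}
NOISE = 0.3  # assumed observation noise (std. dev.)

def likelihood(obs, obj, theta):
    """Gaussian likelihood of an observed cue given object and viewpoint."""
    mu = OBJECTS[obj](theta)
    return math.exp(-0.5 * ((obs - mu) / NOISE) ** 2)

def init_particles(n):
    """One particle = [object hypothesis, viewpoint angle, weight]."""
    objs = list(OBJECTS)
    return [[random.choice(objs), random.uniform(0, 2 * math.pi), 1.0 / n]
            for _ in range(n)]

def motion_update(particles, action):
    """Propagate every particle through the known motor rotation (+ noise)."""
    for p in particles:
        p[1] = (p[1] + action + random.gauss(0, 0.05)) % (2 * math.pi)

def measurement_update(particles, obs):
    """Reweight particles by the observation likelihood, then normalize."""
    for p in particles:
        p[2] *= likelihood(obs, p[0], p[1])
    total = sum(p[2] for p in particles) or 1e-12
    for p in particles:
        p[2] /= total

def object_posterior(particles):
    """Marginalize out the viewpoint to get P(object | observations)."""
    post = dict.fromkeys(OBJECTS, 0.0)
    for obj, _, w in particles:
        post[obj] += w
    return post

def entropy(dist):
    return -sum(w * math.log(w) for w in dist.values() if w > 0)

def next_best_action(particles, actions, samples=10):
    """Pick the rotation whose simulated outcomes give the lowest expected
    posterior entropy, i.e. the highest expected information gain."""
    weights = [p[2] for p in particles]
    best_a, best_h = actions[0], float("inf")
    for a in actions:
        exp_h = 0.0
        for _ in range(samples):
            # Sample a possible true state and simulate the next observation.
            obj, theta, _ = random.choices(particles, weights=weights)[0]
            obs = OBJECTS[obj]((theta + a) % (2 * math.pi)) + random.gauss(0, NOISE)
            hypo = [list(p) for p in particles]  # hypothetical filter copy
            motion_update(hypo, a)
            measurement_update(hypo, obs)
            exp_h += entropy(object_posterior(hypo)) / samples
        if exp_h < best_h:
            best_a, best_h = a, exp_h
    return best_a
```

In this toy model a single view is ambiguous — both objects can produce the same cue — but two views linked by a known rotation already separate them, which is the same effect the paper exploits by coupling visual observations to the robot's motor actions.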