  Learning and Recognizing 3D Objects by Combination of Visual and Proprioceptive Information

Browatzki, B. (2010). Learning and Recognizing 3D Objects by Combination of Visual and Proprioceptive Information. Poster presented at 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010), Heiligkreuztal, Germany.

Basic

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-BDFC-8
Version Permalink: http://hdl.handle.net/21.11116/0000-0002-9D3F-8
Genre: Poster

Files


Creators

Creators:
Browatzki, B. (1, 2), Author
Affiliations:
(1) Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
(2) Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: One major difficulty in computational object recognition lies in the fact that a 3D object can be seen from an infinite number of viewpoints. Thus, the issue arises that objects with different 3D shapes often share similar 2D views. Humans are able to resolve this kind of ambiguity by producing additional views through object manipulation or self-movement. In both cases the action performed provides proprioceptive information linking the visual information retrieved from the obtained views. Following this process, we combine visual and proprioceptive information to increase the recognition performance of a computer vision system. In our approach we place a 3D model of an unknown object in the hand of a simulated anthropomorphic robot arm. The robot then executes a predefined exploratory movement to acquire a variety of different object views. To ensure computational tractability, a subset of representative views is selected using the keyframe concept of Wallraven et al. (2007). Each remaining frame is then annotated with the respective proprioceptive configuration of the robot arm, and the transitions between these configurations are treated as links between object views. For recognizing objects, this representation can be used to control the robot arm based on the learned data. If both proprioceptive and visual data agree on a candidate, the object is recognized successfully. We investigated recognition performance using this method. The results show that the number of misclassifications decreases significantly when both sources – visual and proprioceptive – are available, thus demonstrating the importance of a combined space of visual and proprioceptive information.
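The decision rule described in the abstract – accept a candidate object only when visual and proprioceptive evidence agree – can be illustrated with a minimal toy sketch. This is not the poster's implementation: the `Keyframe` structure, the L1-distance matching, and all thresholds and values below are illustrative assumptions.

```python
from dataclasses import dataclass

# Toy sketch (illustrative, not from the poster): each learned object is a
# list of keyframes, pairing a visual descriptor with the robot-arm joint
# configuration at which that view was acquired.

@dataclass
class Keyframe:
    visual: tuple   # stand-in for an image descriptor
    joints: tuple   # stand-in for arm joint angles

def matches(query, keyframes, field, tol):
    """True if the query lies within `tol` (L1 distance) of any stored
    keyframe along the given modality ('visual' or 'joints')."""
    for kf in keyframes:
        stored = getattr(kf, field)
        if sum(abs(a - b) for a, b in zip(query, stored)) <= tol:
            return True
    return False

def recognize(visual_q, joints_q, model, v_tol=1.0, p_tol=0.5):
    """Accept the candidate only when visual AND proprioceptive
    evidence both match the learned representation."""
    return (matches(visual_q, model, "visual", v_tol)
            and matches(joints_q, model, "joints", p_tol))

# Hypothetical learned model: two keyframes of one object.
mug = [Keyframe((0.2, 0.8), (0.1, 1.5, 0.3)),
       Keyframe((0.9, 0.1), (0.4, 1.2, 0.7))]

print(recognize((0.25, 0.75), (0.1, 1.4, 0.3), mug))  # → True: both modalities agree
print(recognize((0.25, 0.75), (2.0, 0.0, 0.0), mug))  # → False: proprioception disagrees
```

Requiring agreement of both modalities is what reduces misclassifications in the study: a visually ambiguous view is rejected when the arm configuration that produced it does not fit the learned transitions.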

Details

Language(s):
 Dates: 2010-10
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: BibTeX Citekey: 7086
 Degree: -

Event

Title: 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010)
Place of Event: Heiligkreuztal, Germany
Start-/End Date: 2010-10-04 - 2010-10-06


Source 1

Title: 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010)
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: 9
Start / End Page: 29
Identifier: -