  Learning and Recognizing 3D Objects by Combination of Visual and Proprioceptive Information

Browatzki, B. (2010). Learning and Recognizing 3D Objects by Combination of Visual and Proprioceptive Information. Poster presented at 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010), Heiligkreuztal, Germany.


Basic Information

Item type: Poster

Files


Creators

Creator:
Browatzki, B.1, 2, Author
Affiliations:
1 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Description

Keywords: -
Abstract: One major difficulty in computational object recognition lies in the fact that a 3D object can be seen from an infinite number of viewpoints. Thus, the issue arises that objects with different 3D shapes often share similar 2D views. Humans are able to resolve this kind of ambiguity by producing additional views through object manipulation or self-movement. In both cases, the action made provides proprioceptive information linking the visual information retrieved from the obtained views. Following this process, we combine visual and proprioceptive information to increase the recognition performance of a computer vision system. In our approach, we place a 3D model of an unknown object in the hand of a simulated anthropomorphic robot arm. The robot then executes a predefined exploratory movement to acquire a variety of different object views. To ensure computational tractability, a subset of representative views is selected using the keyframe concept by Wallraven et al. (2007). Each remaining frame is then annotated with the respective proprioceptive configuration of the robot arm, and the transitions between these configurations are treated as links between object views. For recognizing objects, this representation can be used to control the robot arm based on learned data. If both proprioceptive and visual data agree on a candidate, the object is recognized successfully. We investigated recognition performance using this method. The results show that the number of misclassified results decreases significantly when both sources – visual and proprioceptive – are available, thus demonstrating the importance of a combined space of visual and proprioceptive information.
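The representation described in the abstract (keyframe views as nodes, proprioceptive transitions between arm configurations as links, and recognition requiring agreement of both cues) can be illustrated with a minimal sketch. All class and function names, the descriptor matching, and the tolerance check below are hypothetical illustrations, not the authors' implementation:

```python
class ViewGraph:
    """Toy view graph: keyframe views linked by proprioceptive transitions."""

    def __init__(self):
        self.views = {}   # view_id -> (visual descriptor, arm configuration)
        self.links = {}   # (view_a, view_b) -> joint-space motion between views

    def add_view(self, view_id, descriptor, arm_config):
        self.views[view_id] = (descriptor, arm_config)

    def add_link(self, a, b):
        # The proprioceptive link is the change in arm configuration
        # needed to move from view a to view b.
        _, qa = self.views[a]
        _, qb = self.views[b]
        self.links[(a, b)] = [q2 - q1 for q1, q2 in zip(qa, qb)]


def recognize(candidates, observed_view, observed_motion, tol=1e-6):
    """Accept a candidate object only if visual AND proprioceptive cues agree."""
    for name, graph in candidates.items():
        visual_ok = any(desc == observed_view
                        for desc, _ in graph.views.values())
        proprio_ok = any(
            all(abs(x - y) <= tol for x, y in zip(link, observed_motion))
            for link in graph.links.values()
        )
        if visual_ok and proprio_ok:
            return name
    return None
```

Requiring both cues to match is what rejects objects that merely share a similar 2D view: a look-alike view with an inconsistent arm motion fails the proprioceptive check.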

Details

Language: -
Date: 2010-10
Publication status: Published
Pages: -
Publishing info: -
Table of contents: -
Review method: -
Identifiers (DOI, ISBN, etc.): BibTex Citekey: 7086
Degree: -

Event

Event name: 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010)
Venue: Heiligkreuztal, Germany
Start / End date: 2010-10-04 - 2010-10-06


Source 1

Title: 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010)
Genre: Proceedings
Creator(s):
Affiliations:
Publisher, Place: -
Pages: -
Volume / Issue: -
Sequence number: 9
Start / End page: 29
Identifiers (ISBN, ISSN, DOI, etc.): -