
Item Details


Released

Conference Paper

Discovering Object Classes from Activities

MPS-Authors
/persons/resource/persons85108

Srikantha,  Abhilash
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society;

URL
There are no locators available
Full Text (public)
There is no public full text available
Supplementary Material (public)
There is no public supplementary material available
Citation

Srikantha, A., & Gall, J. (2014). Discovering Object Classes from Activities. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer Vision - ECCV 2014: 13th European Conference on Computer Vision, Proceedings, Part VI (pp. 415-430). Cham et al.: Springer International Publishing. doi:10.1007/978-3-319-10599-4_27.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0024-E2AB-1
Abstract
In order to avoid an expensive manual labeling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium-sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in these videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.
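The following is a minimal sketch, not the authors' implementation, of the two ideas stated in the abstract: sampling proposal-extraction algorithms and their parameters from a configuration space, and scoring pairs of proposal sequences with a combined appearance and functionality similarity. All names, feature dimensions, and parameter ranges are hypothetical placeholders.

import random
from dataclasses import dataclass

@dataclass
class ProposalTrack:
    appearance: list      # placeholder appearance descriptor of a proposal sequence
    functionality: list   # placeholder features derived from human/object motion

def sample_configuration():
    """Draw one proposal-extraction algorithm and its parameters at random (assumed space)."""
    return {"algorithm": random.choice(["motion_segmentation", "objectness", "tracking"]),
            "scale": random.uniform(0.5, 2.0),
            "threshold": random.uniform(0.1, 0.9)}

def extract_proposals(video, config):
    """Placeholder: run the sampled configuration on a video, return proposal tracks."""
    return [ProposalTrack(appearance=[random.random() for _ in range(8)],
                          functionality=[random.random() for _ in range(4)])]

def similarity(a, b, w_func=0.5):
    """Combine appearance and functionality similarity (here: negative L2 distances)."""
    d_app = sum((x - y) ** 2 for x, y in zip(a.appearance, b.appearance)) ** 0.5
    d_fun = sum((x - y) ** 2 for x, y in zip(a.functionality, b.functionality)) ** 0.5
    return -((1.0 - w_func) * d_app + w_func * d_fun)

videos = ["video_0", "video_1"]   # weakly labelled activity videos (placeholders)
tracks = []
for video in videos:
    for _ in range(3):            # several sampled configurations per video
        tracks.extend(extract_proposals(video, sample_configuration()))

# Pairwise similarities could then drive clustering into discovered object classes.
best_pair = max(((i, j) for i in range(len(tracks)) for j in range(i + 1, len(tracks))),
                key=lambda ij: similarity(tracks[ij[0]], tracks[ij[1]]))
print("most similar proposal pair:", best_pair)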