Item Details

  Visual Decoding of Targets During Visual Search From Human Eye Fixations

Sattar, H., Fritz, M., & Bulling, A. (2017). Visual Decoding of Targets During Visual Search From Human Eye Fixations. Retrieved from http://arxiv.org/abs/1706.05993.

Basic Information

Resource type: Report

Files

arXiv:1706.05993.pdf (Preprint), 4MB
File permalink:
https://hdl.handle.net/11858/00-001M-0000-002D-8B52-0
File name:
arXiv:1706.05993.pdf
Description:
File downloaded from arXiv at 2017-07-05 11:05
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright information:
-
CC license:
http://arxiv.org/help/license


Creators

Creator(s):
Sattar, Hosnieh1, Author
Fritz, Mario1, Author
Bulling, Andreas1, Author
Affiliations:
1Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Human-Computer Interaction, cs.HC
Abstract: What does human gaze reveal about a user's intents, and to what extent can these intents be inferred or even visualized? Gaze was proposed as an implicit source of information to predict the target of visual search and, more recently, to predict the object class and attributes of the search target. In this work, we go one step further and investigate the feasibility of combining recent advances in encoding human gaze information using deep convolutional neural networks with the power of generative image models to visually decode, i.e. create a visual representation of, the search target. Such visual decoding is challenging for two reasons: 1) the search target only resides in the user's mind as a subjective visual pattern, and can most often not even be described verbally by the person, and 2) it is, as of yet, unclear if gaze fixations contain sufficient information for this task at all. We show, for the first time, that visual representations of search targets can indeed be decoded only from human gaze fixations. We propose to first encode fixations into a semantic representation and then decode this representation into an image. We evaluate our method on a recent gaze dataset of 14 participants searching for clothing in image collages and validate the model's predictions using two human studies. Our results show that in 62% of cases (chance level = 10%) users were able to correctly select the category of the decoded image. In our second study we show the importance of a local gaze encoding for decoding visual search targets of users.
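The two-stage pipeline described in the abstract (pool per-fixation features into a semantic representation, then decode that representation into an image) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, mean pooling, and the linear maps are placeholder assumptions standing in for the paper's CNN gaze encoder and generative image model.

```python
# Hypothetical sketch of a fixation-encode / image-decode pipeline.
# All shapes and the untrained random weights are assumptions; they
# stand in for the learned CNN encoder and generative model in the paper.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 64       # per-fixation feature size (assumed)
SEM_DIM = 16        # semantic representation size (assumed)
IMG_SIDE = 32       # decoded image resolution (assumed)

def encode_fixations(fixation_feats: np.ndarray) -> np.ndarray:
    """Pool per-fixation features (n_fixations x FEAT_DIM) into one
    semantic vector; mean pooling stands in for a learned encoder."""
    pooled = fixation_feats.mean(axis=0)
    W_enc = rng.standard_normal((FEAT_DIM, SEM_DIM)) * 0.1
    return np.tanh(pooled @ W_enc)

def decode_to_image(semantic: np.ndarray) -> np.ndarray:
    """Map the semantic vector to pixel space; a linear map plus
    sigmoid stands in for the generative image model."""
    W_dec = rng.standard_normal((SEM_DIM, IMG_SIDE * IMG_SIDE)) * 0.1
    img = 1.0 / (1.0 + np.exp(-(semantic @ W_dec)))  # pixels in [0, 1]
    return img.reshape(IMG_SIDE, IMG_SIDE)

# Example: fixation features from one search trial (random placeholders).
fixations = rng.standard_normal((14, FEAT_DIM))
image = decode_to_image(encode_fixations(fixations))
print(image.shape)  # (32, 32)
```

In the paper itself the two stages are trained models; the point of the sketch is only the data flow: a variable-length fixation sequence is reduced to one fixed-size semantic vector before any image is generated.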

Details

Language(s): eng - English
Dates: 2017-06-19, 2017-06-21, 2017
Publication status: Published online
Pages: 9 p.
Publishing info: -
Table of contents: -
Review method: -
Identifiers (DOI, ISBN, etc.): arXiv: 1706.05993
URI: http://arxiv.org/abs/1706.05993
BibTex citekey: DBLP:journals/corr/SattarFB17
Degree: -
