  Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet

Kümmerer, M., Theis, L., & Bethge, M. (2014). Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet. In International Conference on Learning Representations (ICLR 2015) (pp. 1-12).

Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0027-7FA7-E
Version Permalink: http://hdl.handle.net/21.11116/0000-0000-8212-8
Genre: Conference Paper

Files


Locators

Locator: https://arxiv.org/abs/1411.1045 (Publisher version)
Description: -

Creators

Creators:
Kümmerer, M., Author
Theis, L.1, Author
Bethge, M.1, Author
Affiliations:
1 Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497805

Content

Free keywords: -
Abstract: Recent results suggest that state-of-the-art saliency models perform far from optimally in predicting fixations. This lack of performance has been attributed to an inability to model the influence of high-level image features such as objects. Recent seminal advances in applying deep neural networks to tasks like object recognition suggest that they are able to capture this kind of structure. However, the enormous amount of training data necessary to train these networks makes them difficult to apply directly to saliency prediction. We present a novel way of reusing existing neural networks that have been pretrained on the task of object recognition in models of fixation prediction. Using the well-known network of Krizhevsky et al. (2012), we come up with a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights into the psychophysics of fixation selection and potentially their neural implementation. To train our network, we build on recent work on the modeling of saliency as point processes.
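The abstract describes two ingredients: a linear readout over pretrained feature maps, and training via a point-process likelihood of observed fixations. The sketch below illustrates that combination in a minimal form; it is not the authors' implementation. In the paper, the feature maps come from the Krizhevsky et al. network (and the model includes components such as blurring and a center bias that are omitted here); random arrays stand in for the features, and the function and variable names are hypothetical.

```python
import numpy as np

def saliency_density(feature_maps, weights):
    """Linear readout over feature maps, turned into a fixation density
    by a softmax over all pixel locations (so the map sums to 1)."""
    # feature_maps: (K, H, W) array of K precomputed feature maps
    # weights: (K,) linear readout weights
    s = np.tensordot(weights, feature_maps, axes=1)  # (H, W) saliency map
    s = s - s.max()                                  # numerical stability
    p = np.exp(s)
    return p / p.sum()

def fixation_log_likelihood(density, fixations):
    """Point-process-style log-likelihood: sum of log densities
    at the fixated pixel locations."""
    rows, cols = zip(*fixations)
    return float(np.log(density[rows, cols]).sum())

# Toy example: random arrays stand in for pretrained feature maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 16, 16))
w = rng.standard_normal(5)
density = saliency_density(feats, w)
ll = fixation_log_likelihood(density, [(3, 4), (10, 12)])
```

Training would then adjust `w` to maximize this log-likelihood over recorded fixations, which is the point-process framing the abstract refers to.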

Details

Language(s): -
 Dates: 2014-05-08
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Method: -
 Identifiers: URI: http://arxiv.org/abs/1411.1045
BibTeX Citekey: KummererTB2014
 Degree: -

Event

Title: International Conference on Learning Representations (ICLR 2015)
Place of Event: San Diego, CA, USA
Start-/End Date: -

Legal Case

Project information


Source 1

Title: International Conference on Learning Representations (ICLR 2015)
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1 - 12
Identifier: -