
Record

  DeepGaze II: Explaining nearly all information in image-based saliency using features trained on object detection

Kümmerer, M., Wallis, T., & Bethge, M. (2016). DeepGaze II: Explaining nearly all information in image-based saliency using features trained on object detection. Poster presented at Bernstein Conference 2016, Berlin, Germany.


External References

External reference: Link (any fulltext)
Description: -
OA status: -

Creators

Creators:
Kümmerer, M., Author
Wallis, T. S. A., Author
Bethge, M.1, Author
Affiliations:
1 External Organizations, ou_persistent22

Content

Keywords: -
Abstract: When free-viewing scenes, the first few fixations of human observers are driven in part by bottom-up attention. Over the last decade a large number of models have been proposed to explain these fixations. One problem the field is facing is that the different metrics used to evaluate model performance produce very different rankings for the models. We recently standardized model comparison using an information-theoretic framework and found that existing models captured at most 1/3 of the explainable mutual information between image content and the fixation locations, which might be partially due to the limited data available [1]. Subsequently, we tried to tackle this limitation using a transfer learning strategy. Our model "DeepGaze I" uses a neural network (AlexNet, [2]) that was originally trained for object detection on the ImageNet dataset. It achieved a large improvement over the previous state of the art, explaining 56% of the explainable information [3] (Figure 1c). In the meantime, a new generation of object recognition models has been developed, substantially outperforming AlexNet. The success of "DeepGaze I" and similar models suggests that features that yield good object detection performance can be exploited for better saliency prediction, and that object detection and fixation prediction performance are correlated. Here we test this hypothesis. Our new model "DeepGaze II" uses the VGG network [4] to convert an image into a high-dimensional representation, which is then fed through a second, smaller network to yield a density prediction. The second network is pre-trained using maximum likelihood on the SALICON dataset and fine-tuned on the MIT1003 dataset. Remarkably, DeepGaze II explains 83% of the explainable information on held-out data (Figure 1c), and has since achieved top performance on the MIT Saliency Benchmark. The problem of predicting spatial fixation densities under free-viewing conditions could be solved very soon.
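What follows is a minimal, hypothetical PyTorch sketch of the kind of architecture described above: frozen VGG features are fed through a small readout network of 1x1 convolutions, a softmax over spatial locations turns the output into a fixation density, and training maximizes the log-likelihood of recorded fixations. It is not the authors' implementation; the specific VGG layers, rescaling, blurring, and center-bias prior used in DeepGaze II are omitted here.

    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision.models as models


    class DeepGazeSketch(nn.Module):
        def __init__(self):
            super().__init__()
            # Frozen VGG-19 features, pre-trained for object recognition on ImageNet.
            vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
            self.features = vgg.features
            for p in self.features.parameters():
                p.requires_grad = False
            # Second, smaller readout network: 1x1 convolutions mapping the
            # 512 VGG channels down to a single saliency channel.
            self.readout = nn.Sequential(
                nn.Conv2d(512, 16, kernel_size=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=1), nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=1),
            )

        def forward(self, image):
            x = self.features(image)   # high-dimensional VGG representation
            x = self.readout(x)        # (batch, 1, h, w) saliency scores
            b, _, h, w = x.shape
            # A softmax over all spatial locations turns the scores into a
            # probability density over fixation positions.
            log_density = F.log_softmax(x.view(b, -1), dim=1).view(b, 1, h, w)
            return log_density


    def fixation_nll(log_density, fixations):
        # Negative log-likelihood of observed fixations: the maximum-likelihood
        # training objective. `fixations` is a list (one entry per image) of
        # (row, col) indices given at the resolution of the predicted density.
        loss = 0.0
        for i, fix in enumerate(fixations):
            for (r, c) in fix:
                loss = loss - log_density[i, 0, r, c]
        return loss / len(fixations)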
What makes DeepGaze predictions different? Models before DeepGaze were not only close to each other in performance but also very similar in their predictions, clustering mostly around a simple mean-contrast-luminance model (MLC, Figure 1d). Analyzing prediction performance over time shows that DeepGaze II is especially successful at explaining fixations within the first 600 ms of viewing (Figure 1e). The fact that fixation prediction performance is closely tied to object detection informs theories of attentional selection in scene viewing.
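For context, the percentages quoted in the abstract ("56%", "83%") are ratios of information gain in the framework of [1]: a model's average log-likelihood gain over an image-independent baseline, divided by the gain of a gold-standard estimate of the true fixation density. The snippet below is a rough illustration of that ratio; the particular baseline and gold-standard estimators are assumptions here, not taken from the poster.

    def explainable_information_explained(ll_model, ll_baseline, ll_gold):
        # All arguments are average log-likelihoods per fixation (e.g. in bits):
        #   ll_baseline -- an image-independent baseline (e.g. a center-bias model),
        #   ll_gold     -- a gold-standard estimate of the true fixation density,
        #   ll_model    -- the model being evaluated.
        # The numerator is the model's information gain over the baseline; the
        # denominator is the total explainable information gain.
        return (ll_model - ll_baseline) / (ll_gold - ll_baseline)


    # A model halfway between baseline and gold standard explains 50%:
    print(explainable_information_explained(ll_model=1.0, ll_baseline=0.5, ll_gold=1.5))  # 0.5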

Details

Language(s):
Date: 2016-09-22
Publication status: Published
Pages: -
Place, Publisher, Edition: -
Table of contents: -
Review method: -
Identifiers: DOI: 10.12751/nncn.bc2016.0132
BibTeX Citekey: KummererWB2016
Degree: -

Event

Title: Bernstein Conference 2016
Place of Event: Berlin, Germany
Start/End Date: 2016-09-21 - 2016-09-23

Source 1

Title: Bernstein Conference 2016
Source genre: Conference proceedings
Creators:
Affiliations:
Place, Publisher, Edition: -
Pages: -
Volume / Issue: -
Article number: -
Start / End page: 141 - 142
Identifier: -