
Record


Released

Conference Paper

How to Find Interesting Locations in Video: A Spatiotemporal Interest Point Detector Learned from Human Eye Movements

MPG Authors

Kienzle, W.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Schölkopf, B.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Franz, M. O.
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full Texts (restricted access)
No full texts are currently released for your IP range.
Full Texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Kienzle, W., Schölkopf, B., Wichmann, F., & Franz, M. (2007). How to Find Interesting Locations in Video: A Spatiotemporal Interest Point Detector Learned from Human Eye Movements. In F. A. Hamprecht, C. Schnörr, & B. Jähne (Eds.), Pattern Recognition: 29th DAGM Symposium, Heidelberg, Germany, September 12-14, 2007 (pp. 405-414). Berlin, Germany: Springer.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CBE3-B
Abstract
Interest point detection in still images is a well-studied topic in computer vision. In the spatiotemporal domain, however, it is still unclear which features indicate useful interest points. In this paper we approach the problem by learning a detector from examples: we record eye movements of human subjects watching video sequences and train a neural network to predict which locations are likely to become eye movement targets. We show that our detector outperforms current spatiotemporal interest point architectures on a standard classification dataset.
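
The learning setup described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' actual architecture or data pipeline: the patch size, network shape, labels, and all hyperparameters below are placeholder assumptions, and the eye-tracking data is replaced by random stand-in arrays.

# Hypothetical sketch: train a small neural network to predict whether a
# spatiotemporal video patch attracts a human fixation, then use the
# predicted probability as an "interestingness" score.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in data: each row is a flattened spatiotemporal patch
# (e.g. 13 x 13 pixels x 5 frames) cut from video around a candidate location.
n_samples, patch_dim = 2000, 13 * 13 * 5
X = rng.standard_normal((n_samples, patch_dim)).astype(np.float32)
# Label 1 = a recorded eye movement landed on this patch,
# label 0 = a control patch drawn from a non-fixated location.
y = rng.integers(0, 2, size=n_samples)

# Small feed-forward network acting as the learned detector.
detector = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
detector.fit(X, y)

# At test time, scan patches over the video and keep local maxima of the
# predicted fixation probability as spatiotemporal interest points.
scores = detector.predict_proba(X[:5])[:, 1]
print(scores)

In this formulation the detector is an ordinary binary classifier; what makes it an interest point detector is only how it is applied, namely by densely scoring candidate locations and selecting local maxima of the score.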