  Lucid Data Dreaming for Multiple Object Tracking

Khoreva, A., Benenson, R., Ilg, E., Brox, T., & Schiele, B. (2017). Lucid Data Dreaming for Multiple Object Tracking. Retrieved from http://arxiv.org/abs/1703.09554.

Basic data

Genre: Research paper
Other: Lucid Data Dreaming for Object Tracking

Files

1703.09554v2 (Preprint), 12MB
Name: 1703.09554v2
Description: File downloaded from arXiv at 2017-11-02 14:40
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata: -
Copyright date: -
Copyright info: -

Creators

Creators:
Khoreva, Anna¹, Author
Benenson, Rodrigo², Author
Ilg, Eddy², Author
Brox, Thomas², Author
Schiele, Bernt¹, Author
Affiliations:
¹Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547
²External Organizations, ou_persistent22

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Convolutional networks reach top quality in pixel-level object tracking but require a large amount of training data (1k ~ 10k) to deliver such results. We propose a new training strategy which achieves state-of-the-art results across three evaluation datasets while using 20x ~ 100x less annotated data than competing methods. Instead of using large training sets hoping to generalize across domains, we generate in-domain training data using the provided annotation on the first frame of each video to synthesize ("lucid dream") plausible future video frames. In-domain per-video training data allows us to train high-quality appearance- and motion-based models, as well as tune the post-processing stage. This approach allows us to reach competitive results even when training from only a single annotated frame, without ImageNet pre-training. Our results indicate that using a larger training set is not automatically better, and that for the tracking task a smaller training set that is closer to the target domain is more effective. This changes the mindset regarding how many training samples and general "objectness" knowledge are required for the object tracking task.
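
Note: the abstract describes the data-synthesis idea only at a high level. As a rough illustrative sketch, not the authors' actual pipeline (which inpaints the background and deforms foreground and background separately before compositing), the following Python snippet shows how in-domain training pairs could be synthesized from a single annotated frame by jointly perturbing the image and its mask. It assumes NumPy and OpenCV; the function name and parameter ranges are placeholders chosen for illustration.

# Illustrative sketch only: synthesize extra (image, mask) training pairs from a
# single annotated frame via small random geometric and photometric perturbations.
# The paper's actual "lucid dreaming" pipeline is more elaborate.
import numpy as np
import cv2

def synthesize_pairs(image, mask, num_samples=100, seed=0):
    """image: HxWx3 uint8 array; mask: HxW binary array (1 = tracked object).
    Returns a list of perturbed (image, mask) pairs as in-domain training data."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    pairs = []
    for _ in range(num_samples):
        # Small random rotation, scale and translation (affine warp).
        angle = rng.uniform(-10, 10)   # degrees
        scale = rng.uniform(0.9, 1.1)
        tx = rng.uniform(-0.05, 0.05) * w
        ty = rng.uniform(-0.05, 0.05) * h
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, scale)
        M[0, 2] += tx
        M[1, 2] += ty
        warped_img = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
        warped_mask = cv2.warpAffine(mask.astype(np.uint8), M, (w, h),
                                     flags=cv2.INTER_NEAREST)
        # Mild photometric jitter (brightness / contrast) on the image only.
        gain = rng.uniform(0.8, 1.2)
        bias = rng.uniform(-15, 15)
        jittered = np.clip(warped_img.astype(np.float32) * gain + bias,
                           0, 255).astype(np.uint8)
        pairs.append((jittered, warped_mask))
    return pairs

Training a per-video model on pairs generated this way is what lets the approach stay close to the target domain while using far less annotated data than methods trained on large generic datasets.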

Details

Language(s): eng - English
Date: 2017-03-28
Publication status: Published online
Pages: 17 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 1703.09554
URI: http://arxiv.org/abs/1703.09554
BibTeX citekey: khoreva_lucid_dreams17
Degree type: -
