Record

  Number it: Temporal Grounding Videos like Flipping Manga

Wu, Y., Hu, X., Sun, Y., Zhou, Y., Zhu, W., Rao, F., et al. (2024). Number it: Temporal Grounding Videos like Flipping Manga. Retrieved from https://arxiv.org/abs/2411.10332.

Basic data

Genre: Research paper

Files

arXiv:2411.10332.pdf (Preprint), 3MB
Name:
arXiv:2411.10332.pdf
Description:
File downloaded from arXiv at 2024-11-25 12:21
OA status:
Not specified
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

Creators

Creators:
Wu, Yongliang1, Author
Hu, Xinting2, Author
Sun, Yuyang1, Author
Zhou, Yizhou1, Author
Zhu, Wenbo1, Author
Rao, Fengyun1, Author
Schiele, Bernt2, Author
Yang, Xu1, Author
Affiliations:
1External Organizations, ou_persistent22              
2Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547              

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Video Large Language Models (Vid-LLMs) have made remarkable advancements in comprehending video content for QA dialogue. However, they struggle to extend this visual understanding to tasks requiring precise temporal localization, known as Video Temporal Grounding (VTG). To address this gap, we introduce Number-Prompt (NumPro), a novel method that empowers Vid-LLMs to bridge visual comprehension with temporal grounding by adding unique numerical identifiers to each video frame. Treating a video as a sequence of numbered frame images, NumPro transforms VTG into an intuitive process: flipping through manga panels in sequence. This allows Vid-LLMs to "read" event timelines, accurately linking visual content with corresponding temporal information. Our experiments demonstrate that NumPro significantly boosts the VTG performance of top-tier Vid-LLMs without additional computational cost. Furthermore, fine-tuning on a NumPro-enhanced dataset defines a new state-of-the-art for VTG, surpassing previous top-performing methods by up to 6.9% in mIoU for moment retrieval and 8.5% in mAP for highlight detection. The code will be available at https://github.com/yongliang-wu/NumPro.
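
The abstract describes the core idea as stamping a unique numerical identifier onto each video frame so the model can "read" event timelines like page numbers in a manga. The snippet below is a minimal illustrative sketch of that idea only, not the official NumPro implementation (which is at the GitHub link above); the OpenCV-based overlay, the `number_frames` helper, the uniform sampling strategy, and the font and position parameters are all assumptions.

```python
# Minimal sketch (assumed, not the official NumPro code): sample frames from a
# video and draw each frame's index onto it before feeding the frames to a Vid-LLM.
import cv2

def number_frames(video_path, num_frames=64):
    """Uniformly sample `num_frames` frames and overlay their index on each one."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // num_frames, 1)
    numbered = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * step)
        ok, frame = cap.read()
        if not ok:
            break
        # Draw the frame index in the corner so the model can associate visual
        # content with an explicit temporal position.
        cv2.putText(frame, str(i), (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    1.2, (0, 0, 255), 3, cv2.LINE_AA)
        numbered.append(frame)
    cap.release()
    return numbered
```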

Details

Language(s): eng - English
Date: 2024-11-15
Publication status: Published online
Pages: 11 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 2411.10332
URI: https://arxiv.org/abs/2411.10332
BibTex Citekey: Wu_2411.10332
Degree type: -
