  Where and When: Space-Time Attention for Audio-Visual Explanations

Chen, Y., Hummel, T., Koepke, A. S., & Akata, Z. (2021). Where and When: Space-Time Attention for Audio-Visual Explanations. Retrieved from https://arxiv.org/abs/2105.01517.

Basic data

Genre: Research paper
LaTeX title: Where and When: {S}pace-Time Attention for Audio-Visual Explanations

Files

arXiv:2105.01517.pdf (Preprint), 7MB
Name: arXiv:2105.01517.pdf
Description: File downloaded from arXiv at 2021-11-29 08:46
OA status: -
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata:
Copyright date: -
Copyright info: -

Creators

Chen, Yanbei (1), Author
Hummel, Thomas (1), Author
Koepke, A. Sophia (1), Author
Akata, Zeynep (2), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Artificial Intelligence, cs.AI; Computer Science, Learning, cs.LG
Abstract: Explaining the decision of a multi-modal decision-maker requires determining the evidence from both modalities. Recent advances in XAI provide explanations for models trained on still images. However, when it comes to modeling multiple sensory modalities in a dynamic world, how to demystify the dynamics of a complex multi-modal model remains underexplored. In this work, we take a crucial step forward and explore learnable explanations for audio-visual recognition. Specifically, we propose a novel space-time attention network that uncovers the synergistic dynamics of audio and visual data over both space and time. Our model predicts audio-visual video events while justifying its decision by localizing where the relevant visual cues appear and when the predicted sounds occur in videos. We benchmark our model on three audio-visual video event datasets, comparing extensively to multiple recent multi-modal representation learners and intrinsic explanation models. Experimental results demonstrate the clearly superior performance of our model over existing methods on audio-visual video event recognition. Moreover, we conduct an in-depth study to analyze the explainability of our model based on robustness analysis via perturbation tests and pointing games using human annotations.
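
For illustration only, the lines below give a minimal sketch of how cross-modal space-time attention of this kind could be structured. This is not the authors' implementation: the module and variable names (SpaceTimeAttention, spatial_attn, temporal_attn), the feature shapes, and the class count are assumptions, and audio/visual features are taken as pre-extracted.

# Illustrative sketch only (assumptions: pre-extracted features, PyTorch,
# num_classes chosen to match a common audio-visual event benchmark;
# not the authors' code).
import torch
import torch.nn as nn

class SpaceTimeAttention(nn.Module):
    def __init__(self, dim=512, heads=8, num_classes=28):
        super().__init__()
        # Spatial ("where") attention: each audio segment attends over the
        # spatial positions of the corresponding video frame.
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Temporal ("when") attention: fused audio-visual features attend
        # over the time axis of the video.
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, visual, audio):
        # visual: (B, T, S, D) with S spatial positions per frame
        # audio:  (B, T, D) with one feature vector per temporal segment
        B, T, S, D = visual.shape
        v = visual.reshape(B * T, S, D)
        a = audio.reshape(B * T, 1, D)
        # Spatial attention weights localize *where* the relevant visual cues are.
        fused, spatial_w = self.spatial_attn(a, v, v)              # (B*T, 1, D), (B*T, 1, S)
        fused = fused.reshape(B, T, D)
        # Temporal attention weights localize *when* the predicted sounds occur.
        ctx, temporal_w = self.temporal_attn(fused, fused, fused)  # (B, T, D), (B, T, T)
        logits = self.classifier(ctx.mean(dim=1))                  # video-level event prediction
        return logits, spatial_w.reshape(B, T, S), temporal_w

# Example usage: 2 videos, 10 one-second segments, a 7x7 spatial grid, 512-dim features.
model = SpaceTimeAttention()
logits, where, when = model(torch.randn(2, 10, 49, 512), torch.randn(2, 10, 512))

The returned spatial and temporal attention maps play the role of the "where" and "when" explanations described in the abstract; the design choice of querying visual locations with audio, then attending over time, is one plausible reading of the paper's setup.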

Details

Language(s): eng - English
Date: 2021-05-04
Publication status: Published online
Pages: 13 p.
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: arXiv: 2105.01517
BibTeX citekey: Chen2105.01517
URI: https://arxiv.org/abs/2105.01517
Degree type: -

Project information

Project name: DEXIM
Grant ID: 853489
Funding programme: Horizon 2020 (H2020)
Funding organisation: European Commission (EC)
