  Binaural SoundNet: Predicting Semantics, Depth and Motion with Binaural Sounds

Dai, D., Vasudevan, A. B., Matas, J., & Van Gool, L. (2021). Binaural SoundNet: Predicting Semantics, Depth and Motion with Binaural Sounds. Retrieved from https://arxiv.org/abs/2109.02763.

Files

arXiv:2109.02763.pdf (Preprint), 16MB
 
Name: arXiv:2109.02763.pdf
Description: File downloaded from arXiv at 2021-09-28 06:52. Journal extension of our ECCV'20 paper, 15 pages. arXiv admin note: substantial text overlap with arXiv:2003.04210.
Visibility: Private
MIME-Type: application/pdf

Creators

Dai, Dengxin¹, Author
Vasudevan, Arun Balajee², Author
Matas, Jiri², Author
Van Gool, Luc², Author
Affiliations:
¹ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
² External Organizations, ou_persistent22

Content

Free keywords: Sound (cs.SD); Computer Vision and Pattern Recognition (cs.CV); Audio and Speech Processing (eess.AS)
Abstract: Humans can robustly recognize and localize objects by using visual and/or auditory cues. While machines are able to do the same with visual data already, less work has been done with sounds. This work develops an approach for scene understanding purely based on binaural sounds. The considered tasks include predicting the semantic masks of sound-making objects, the motion of sound-making objects, and the depth map of the scene. To this aim, we propose a novel sensor setup and record a new audio-visual dataset of street scenes with eight professional binaural microphones and a 360-degree camera. The co-existence of visual and audio cues is leveraged for supervision transfer. In particular, we employ a cross-modal distillation framework that consists of multiple vision teacher methods and a sound student method -- the student method is trained to generate the same results as the teacher methods do. This way, the auditory system can be trained without using human annotations. To further boost the performance, we propose another novel auxiliary task, coined Spatial Sound Super-Resolution, to increase the directional resolution of sounds. We then formulate the four tasks into one end-to-end trainable multi-tasking network aiming to boost the overall performance. Experimental results show that 1) our method achieves good results for all four tasks, 2) the four tasks are mutually beneficial -- training them together achieves the best performance, 3) the number and orientation of microphones are both important, and 4) features learned from the standard spectrogram and features obtained by the classic signal processing pipeline are complementary for auditory perception tasks. The data and code are released.
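
For orientation, the cross-modal distillation idea in the abstract can be sketched in a few lines of PyTorch: frozen vision "teacher" networks produce pseudo-labels (semantic masks, depth, motion) from the 360-degree images, and an audio "student" network is trained on the paired binaural spectrograms to reproduce them, so no human annotations are needed. Everything below -- layer sizes, head design, equal loss weighting, the AudioStudent name -- is an illustrative assumption, not the paper's architecture; the Spatial Sound Super-Resolution auxiliary head is omitted for brevity.

    # Minimal sketch of cross-modal (vision -> audio) distillation.
    # All dimensions and design choices are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AudioStudent(nn.Module):
        """Toy encoder over multi-channel binaural spectrograms,
        with one prediction head per scene-understanding task."""
        def __init__(self, n_channels=8, n_classes=19):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(n_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.semantic_head = nn.Conv2d(64, n_classes, 1)  # class logits
            self.depth_head = nn.Conv2d(64, 1, 1)             # depth map
            self.motion_head = nn.Conv2d(64, 2, 1)            # 2-D motion field

        def forward(self, spec):
            h = self.encoder(spec)
            return self.semantic_head(h), self.depth_head(h), self.motion_head(h)

    def distillation_loss(student_out, teacher_out):
        """Student mimics the frozen vision teachers' pseudo-labels.
        Equal task weights here are an assumption, not the paper's choice."""
        sem_s, depth_s, motion_s = student_out
        sem_t, depth_t, motion_t = teacher_out  # pseudo-labels, no gradients
        return (F.cross_entropy(sem_s, sem_t)   # semantic masks (hard labels)
                + F.l1_loss(depth_s, depth_t)   # depth map
                + F.l1_loss(motion_s, motion_t))  # motion field

    # Usage with fake data standing in for teacher pseudo-labels:
    student = AudioStudent()
    spec = torch.randn(2, 8, 64, 64)               # batch of spectrograms
    with torch.no_grad():                          # teachers are frozen
        sem_t = torch.randint(0, 19, (2, 16, 16))  # hard semantic labels
        depth_t = torch.rand(2, 1, 16, 16)
        motion_t = torch.randn(2, 2, 16, 16)
    loss = distillation_loss(student(spec), (sem_t, depth_t, motion_t))
    loss.backward()

Summing the per-task losses into one objective also reflects the abstract's multi-task formulation, where training the tasks together is reported to perform best.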

Details

Language(s): eng - English
Dates: 2021-09-06, 2021
Publication Status: Published online
Pages: 15 p.
Identifiers: arXiv: 2109.02763
BibTeX Citekey: Dai2109.02763
URI: https://arxiv.org/abs/2109.02763
