Released

Paper

Binaural SoundNet: Predicting Semantics, Depth and Motion with Binaural Sounds

MPS-Authors

Dai, Dengxin
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Citation

Dai, D., Vasudevan, A. B., Matas, J., & Van Gool, L. (2021). Binaural SoundNet: Predicting Semantics, Depth and Motion with Binaural Sounds. Retrieved from https://arxiv.org/abs/2109.02763.


Cite as: https://hdl.handle.net/21.11116/0000-0009-444C-6
Abstract
Humans can robustly recognize and localize objects by using visual and/or
auditory cues. While machines can already do this with visual data,
less work has been done with sounds. This work develops an approach for scene
understanding purely based on binaural sounds. The considered tasks include
predicting the semantic masks of sound-making objects, the motion of
sound-making objects, and the depth map of the scene. To this end, we propose a
novel sensor setup and record a new audio-visual dataset of street scenes with
eight professional binaural microphones and a 360-degree camera. The
co-existence of visual and audio cues is leveraged for supervision transfer. In
particular, we employ a cross-modal distillation framework that consists of
multiple vision teacher methods and a sound student method -- the student is
trained to produce the same results as the teachers. This
way, the auditory system can be trained without using human annotations. To
further boost the performance, we propose another novel auxiliary task, coined
Spatial Sound Super-Resolution, to increase the directional resolution of
sounds. We then formulate all four tasks in a single end-to-end trainable
multi-task network to boost the overall performance. Experimental
results show that 1) our method achieves good results for all four tasks, 2)
the four tasks are mutually beneficial -- training them together achieves the
best performance, 3) the number and orientation of microphones are both
important, and 4) features learned from the standard spectrogram and features
obtained by the classic signal processing pipeline are complementary for
auditory perception tasks. The data and code are released.
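
The abstract describes a cross-modal distillation setup: frozen vision "teacher" networks produce semantic, depth, and motion pseudo-labels from the paired 360-degree images, and a binaural-sound "student" network is trained to reproduce them, with an auxiliary Spatial Sound Super-Resolution (S3R) head. The following is a minimal sketch of that idea, not the authors' released code; all module names, tensor shapes, class counts, and loss weights are illustrative assumptions.

```python
# Illustrative sketch of the cross-modal distillation and multi-task loss
# described in the abstract. NOT the authors' released implementation:
# architecture, shapes, and weights are assumptions for exposition only.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19   # assumed number of semantic classes
NUM_MICS = 8       # eight binaural microphones, as stated in the abstract


class SoundStudent(nn.Module):
    """Shared audio encoder with one head per task (illustrative architecture)."""

    def __init__(self):
        super().__init__()
        # Spectrogram input: (B, 2 * NUM_MICS, freq, time) -> shared features.
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * NUM_MICS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.semantic_head = nn.Conv2d(128, NUM_CLASSES, 1)
        self.depth_head = nn.Conv2d(128, 1, 1)
        self.motion_head = nn.Conv2d(128, 2, 1)          # 2D motion field
        self.s3r_head = nn.Conv2d(128, 2 * NUM_MICS, 1)  # super-resolved channels

    def forward(self, spec):
        feat = self.encoder(spec)
        return (self.semantic_head(feat), self.depth_head(feat),
                self.motion_head(feat), self.s3r_head(feat))


def multitask_distillation_loss(student_out, teacher_sem, teacher_depth,
                                teacher_motion, s3r_target,
                                weights=(1.0, 1.0, 1.0, 1.0)):
    """Student mimics frozen vision teachers; the weights here are assumptions."""
    sem, depth, motion, s3r = student_out
    l_sem = F.cross_entropy(sem, teacher_sem)      # pseudo semantic masks
    l_depth = F.l1_loss(depth, teacher_depth)      # pseudo depth maps
    l_motion = F.l1_loss(motion, teacher_motion)   # pseudo motion fields
    l_s3r = F.l1_loss(s3r, s3r_target)             # spatial sound super-resolution
    w = weights
    return w[0] * l_sem + w[1] * l_depth + w[2] * l_motion + w[3] * l_s3r
```

In this sketch, the pseudo-labels would come from running frozen vision networks on the time-aligned 360-degree frames, so no human annotations are needed; one plausible choice for the S3R target (an assumption, not taken from the abstract) is microphone channels withheld from the student's input, so the head must increase the directional resolution of the sound.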