Journal Article

Compensating class imbalance for acoustic chimpanzee detection with convolutional recurrent neural networks (advance online)

MPS-Authors
/persons/resource/persons104419

Kalan,  Ammie K.
Great Ape Evolutionary Ecology and Conservation, Department of Primatology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society;
Chimpanzees, Department of Primatology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society;
Department of Primatology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society;

/persons/resource/persons72806

Kühl,  Hjalmar S.
Great Ape Evolutionary Ecology and Conservation, Department of Primatology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society;
Chimpanzees, Department of Primatology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society;
Department of Primatology, Max Planck Institute for Evolutionary Anthropology, Max Planck Society;

Citation

Anders, F., Kalan, A. K., Kühl, H. S., & Fuchs, M. (2021). Compensating class imbalance for acoustic chimpanzee detection with convolutional recurrent neural networks (advance online). Ecological Informatics, 65: 101423. doi:10.1016/j.ecoinf.2021.101423.


Cite as: http://hdl.handle.net/21.11116/0000-0009-6A51-5
Abstract
Automatic detection systems are important in passive acoustic monitoring (PAM), as PAM systems record large amounts of audio data that are infeasible for humans to evaluate manually. In this paper we evaluated methods for compensating class imbalance in deep-learning-based automatic detection of acoustic chimpanzee calls. Chimpanzee calls are very rare in natural habitats, i.e., databases feature a heavy imbalance between background and target calls, and such imbalance can degrade classifier performance. We employed a state-of-the-art detection approach based on convolutional recurrent neural networks (CRNNs) and extended the detection pipeline with various stages for compensating class imbalance: (1) spectrogram denoising, (2) alternative loss functions, and (3) resampling. Our key findings are: (1) spectrogram denoising operations significantly improved performance for both target classes, (2) standard binary cross entropy reached the highest performance, and (3) manipulating the relative class imbalance through resampling either decreased or maintained performance, depending on the target class. Finally, we reached detection performances of 33% F1 for drumming and 5% F1 for vocalization, a more than 7-fold increase over previously published results. We conclude that helping the network learn to decouple noise conditions from the foreground classes is of primary importance for increasing performance.
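To make the two compensation stages highlighted in the abstract concrete, the following is a minimal sketch of (1) a simple spectrogram denoising step and (3) random oversampling of the minority class. The paper does not specify its exact denoising or resampling procedures, so both functions here are illustrative assumptions: the denoiser subtracts a per-frequency-bin median as a crude noise-floor estimate, and the oversampler duplicates rare examples until the classes balance.

```python
import numpy as np

def denoise_spectrogram(spec):
    """Illustrative denoising: estimate each frequency bin's noise floor
    as its median over time, subtract it, and clip negatives to zero.
    `spec` is a (frequency_bins, time_frames) magnitude spectrogram."""
    noise_floor = np.median(spec, axis=1, keepdims=True)
    return np.clip(spec - noise_floor, 0.0, None)

def oversample_minority(features, labels, rng=None):
    """Illustrative resampling: randomly duplicate minority-class
    examples until both classes have equal counts."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = rng.choice(minority, size=len(majority) - len(minority),
                       replace=True)
    idx = np.concatenate([majority, minority, extra])
    rng.shuffle(idx)
    return features[idx], labels[idx]
```

For example, with 1 target call among 10 clips, `oversample_minority` returns 18 examples, 9 per class; a constant (pure-noise) frequency bin is zeroed entirely by `denoise_spectrogram`. In practice one would denoise before feature extraction and resample only the training split, so the evaluation set keeps the natural imbalance.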