Journal Article

Auditory information coding by modeled cochlear nucleus neurons


Borst,  A.
Department: Systems and Computational Neurobiology / Borst, MPI of Neurobiology, Max Planck Society;


Wang, H., Isik, M., Borst, A., & Hemmert, W. (2011). Auditory information coding by modeled cochlear nucleus neurons. Journal of Computational Neuroscience, 30(3), 529-542. doi:10.1007/s10827-010-0276-x.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0012-1EE1-D
In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 µs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one reason for the lack of noise robustness of these systems.
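The abstract's core quantities (bits per second, bits per spike, temporal resolution) come from an information-theoretic analysis of binned spike trains. The sketch below illustrates one standard way such numbers are obtained, the "direct method" of estimating mutual information from repeated stimulus presentations (total word entropy minus noise entropy, divided by word duration). This is a minimal illustration, not the paper's actual analysis pipeline; the function names, word length, and bin width are assumptions for the example.

```python
import numpy as np

def entropy_bits(words):
    """Shannon entropy (in bits) of an array of binary spike words (rows)."""
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def direct_info_rate(spikes, word_len, dt):
    """Direct-method information rate estimate (bits/s).

    spikes   : (n_trials, n_bins) binary array; each row is one repeated
               presentation of the same stimulus, binned at resolution dt.
    word_len : number of bins per spike "word".
    dt       : bin width in seconds (the temporal resolution).
    """
    n_trials, n_bins = spikes.shape
    n_words = n_bins // word_len
    # Total entropy: words pooled over time (and trials) sample the
    # full response distribution.
    pooled = spikes[:, :n_words * word_len].reshape(n_trials * n_words, word_len)
    h_total = entropy_bits(pooled)
    # Noise entropy: variability across trials at each fixed time window.
    h_noise = np.mean([
        entropy_bits(spikes[:, t * word_len:(t + 1) * word_len])
        for t in range(n_words)
    ])
    return (h_total - h_noise) / (word_len * dt)
```

With perfectly reproducible spike trains the noise entropy is zero, so the information rate equals the total entropy rate; finer bin widths (smaller `dt`) let the estimate capture the sub-millisecond timing precision that the abstract reports for GBCs. Note also the simple relation linking the reported figures: an information rate in bits/s divided by bits per spike gives the implied firing rate (here roughly 1,050 / 5.8 ≈ 180 spikes/s).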