  Auditory information coding by modeled cochlear nucleus neurons

Wang, H., Isik, M., Borst, A., & Hemmert, W. (2011). Auditory information coding by modeled cochlear nucleus neurons. Journal of Computational Neuroscience, 30(3), 529-542. doi:10.1007/s10827-010-0276-x.

Creators:
Wang, H. (1), Author
Isik, M. (1), Author
Borst, A. (2), Author
Hemmert, W. (1), Author
Affiliations:
1: [Wang, Huan; Isik, Michael; Hemmert, Werner] Tech Univ Munich, Inst Med Engn, D-85748 Garching, Germany; [Wang, Huan; Hemmert, Werner] Infineon Technol AG, Neubiberg, Germany; [Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner] Bernstein Ctr Computat Neurosci, Munich, Germany
2: Department of Systems and Computational Neurobiology (Borst), MPI of Neurobiology, Max Planck Society

Content

Free keywords: Onset neuron; Information theory; Neural coding; Automatic speech recognition; Temporal resolution
Abstract: In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 µs). The increase in transmitted information up to that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.
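The bits/s and bits-per-spike figures above come from an information-theoretic analysis of spike trains binned at a chosen temporal resolution. As a rough illustration only (not the authors' code), the following Python sketch estimates a transmitted-information rate from repeated spike-train responses using the direct (word-entropy) method of Strong et al.; the binning parameters, function names, and surrogate data are all illustrative assumptions.

import numpy as np

def words(binary_train, word_len):
    # Slice a binary (0/1) spike train into overlapping words of word_len bins.
    n = len(binary_train) - word_len + 1
    return [tuple(binary_train[i:i + word_len]) for i in range(n)]

def entropy_rate(word_list, bin_width_s, word_len):
    # Entropy of the empirical word distribution, converted to bits per second.
    _, counts = np.unique(np.array(word_list), axis=0, return_counts=True)
    p = counts / counts.sum()
    h_word = -np.sum(p * np.log2(p))          # bits per word
    return h_word / (word_len * bin_width_s)  # bits per second

def transmitted_info_rate(trials, bin_width_s, word_len):
    # trials: (n_repeats, n_bins) array of 0/1 spike counts for repeated
    # presentations of the same stimulus (e.g. the same recorded vowel).
    trials = np.asarray(trials)
    # Total entropy: word distribution pooled over all repeats and times.
    all_words = [w for tr in trials for w in words(tr, word_len)]
    h_total = entropy_rate(all_words, bin_width_s, word_len)
    # Noise entropy: across-repeat variability at each time, averaged over time.
    n_starts = trials.shape[1] - word_len + 1
    h_noise = np.mean([
        entropy_rate([tuple(tr[i:i + word_len]) for tr in trials],
                     bin_width_s, word_len)
        for i in range(n_starts)
    ])
    return h_total - h_noise  # bits per second (uncorrected for sampling bias)

# Toy usage with surrogate data: 50 repeats, 1 ms bins, 8-bin words.
rng = np.random.default_rng(0)
trials = (rng.random((50, 2000)) < 0.05).astype(int)
print(transmitted_info_rate(trials, bin_width_s=1e-3, word_len=8), "bits/s")

Shrinking the bin width toward the 20 µs resolution mentioned in the abstract increases the number of distinct words and hence the entropy that precise spike timing can carry, which is why the estimate requires many stimulus repeats at fine resolutions.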

Details

Language(s): eng - English
 Dates: 2011-06
 Publication Status: Issued
 Rev. Type: Peer
 Identifiers: eDoc: 564872
ISI: 000291253400002
DOI: 10.1007/s10827-010-0276-x

Source 1

Title: Journal of Computational Neuroscience
Source Genre: Journal
Publ. Info: Boston : Kluwer Academic Publishers
Volume / Issue: 30 (3)
Start / End Page: 529 - 542
Identifier: ISSN: 0929-5313
CoNE: https://pure.mpg.de/cone/journals/resource/954925568787