
Item Details


Released

Journal Article

From birdsong to human speech recognition: Bayesian inference on a hierarchy of nonlinear dynamical systems

MPS-Authors
/persons/resource/persons22662

Yildiz,  Burak
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Département d'études cognitives, École normale supérieure, Paris, France;

/persons/resource/persons20071

von Kriegstein,  Katharina
Max Planck Research Group Neural Mechanisms of Human Communication, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Department of Psychology, Humboldt University Berlin, Germany;

/persons/resource/persons19770

Kiebel,  Stefan J.
Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Hans Berger Clinic for Neurology, Jena University Hospital, Germany;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

Yildiz_FromBirdsong.pdf
(Publisher version), 2MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Yildiz, B., von Kriegstein, K., & Kiebel, S. J. (2013). From birdsong to human speech recognition: Bayesian inference on a hierarchy of nonlinear dynamical systems. PLoS Computational Biology, 9(9). doi:10.1371/journal.pcbi.1003219.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0014-598C-A
Abstract
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.
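The central idea in the abstract — a hierarchy of nonlinear dynamical systems in which slow higher-level dynamics shape fast lower-level dynamics — can be illustrated with a minimal sketch. The code below is not the paper's model: the choice of Lorenz systems, the time constant `tau_slow`, and the mapping from the slow state to the fast system's `rho` parameter are all illustrative assumptions, used only to show how one dynamical level can act as a control signal for another.

```python
import numpy as np

def lorenz(state, rho, sigma=10.0, beta=8.0 / 3.0):
    """Lorenz vector field; rho is the control parameter."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def simulate_hierarchy(T=2000, dt=0.005, tau_slow=8.0, seed=0):
    """Two-level hierarchy (illustrative only, not the paper's equations):
    a slowly evolving Lorenz system sets the control parameter of a fast
    Lorenz system, so slow dynamics shape the fast trajectory — the
    generative idea behind hierarchical sequence models."""
    rng = np.random.default_rng(seed)
    slow = rng.standard_normal(3) + np.array([1.0, 1.0, 25.0])
    fast = rng.standard_normal(3) + np.array([1.0, 1.0, 25.0])
    traj = np.empty((T, 3))
    for t in range(T):
        # slow level: same dynamics, but on a slower time scale
        slow = slow + (dt / tau_slow) * lorenz(slow, rho=28.0)
        # hypothetical coupling: slow state modulates the fast level's rho
        rho_fast = 24.0 + 0.2 * slow[2]
        # fast level: forward-Euler step driven by the modulated parameter
        fast = fast + dt * lorenz(fast, rho=rho_fast)
        traj[t] = fast
    return traj

traj = simulate_hierarchy()
print(traj.shape)  # (2000, 3)
```

In the paper's Bayesian setting, recognition would correspond to inverting such a generative hierarchy online — inferring the hidden slow and fast states from the observed output — rather than simulating it forward as done here.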