
Item Details


Released

Journal Article

Theta and gamma bands encode acoustic dynamics over wide-ranging timescales

MPS-Authors
/persons/resource/persons212714

Teng,  Xiangbin
Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society;

/persons/resource/persons173724

Poeppel,  David
Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society;
Department of Psychology, New York University;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are currently no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Teng, X., & Poeppel, D. (2020). Theta and gamma bands encode acoustic dynamics over wide-ranging timescales. Cerebral Cortex, 30(4), 2600-2614. doi:10.1093/cercor/bhz263.


Cite as: https://hdl.handle.net/21.11116/0000-0006-6BD1-6
Abstract
Natural sounds contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over wide-ranging timescales to achieve sound recognition? Previous work (Teng et al. 2017) demonstrated a temporal coding preference for the theta and gamma ranges, but it remains unclear how acoustic dynamics between these two ranges are coded. Here, we generated artificial sounds with temporal structures over timescales from ∼200 to ∼30 ms and investigated temporal coding on different timescales. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography recording. Although considerable intertrial phase coherence can be induced by acoustic dynamics of all the timescales, classification analyses reveal that the acoustic information of all timescales is preferentially differentiated through the theta and gamma bands, but not through the alpha and beta bands; stimulus reconstruction shows that the acoustic dynamics in the theta and gamma ranges are preferentially coded. We demonstrate that the theta and gamma bands show the generality of temporal coding with comparable capacity. Our findings provide a novel perspective: acoustic information of all timescales is discretised into two discrete temporal chunks for further perceptual analysis.