Speech fine structure contains critical temporal cues to support speech segmentation

Teng, X., Cogan, G., & Poeppel, D. (2019). Speech fine structure contains critical temporal cues to support speech segmentation. NeuroImage, 202: 116152. doi:10.1016/j.neuroimage.2019.116152.

Basic

Item Permalink: http://hdl.handle.net/21.11116/0000-0003-6938-9
Version Permalink: http://hdl.handle.net/21.11116/0000-0005-3C37-B
Genre: Journal Article

Creators

Creators:
Teng, Xiangbin (1), Author
Cogan, Gregory (2), Author
Poeppel, David (1, 3), Author
Affiliations:
1. Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society (ou_2421697)
2. Department of Neurosurgery, Duke University, Durham, NC 27710, USA (ou_persistent22)
3. Department of Psychology, New York University, New York, NY 10003, USA (ou_persistent22)

Content

Free keywords: Speech segmentation; Cortical entrainment; Spectral correlation; Spectro-temporal
Abstract: Segmenting the continuous speech stream into units for further perceptual and linguistic analyses is fundamental to speech recognition. The speech amplitude envelope (SE) has long been considered a fundamental temporal cue for segmenting speech. Does the temporal fine structure (TFS), a significant part of speech signals often considered to contain primarily spectral information, contribute to speech segmentation? Using magnetoencephalography, we show that the TFS entrains cortical oscillatory responses between 3 and 6 Hz and demonstrate, using mutual information analysis, that (i) the temporal information in the TFS can be reconstructed from a measure of frame-to-frame spectral change and correlates with the SE, and (ii) spectral resolution is key to the extraction of such temporal information. Furthermore, we show behavioural evidence that, when the SE is temporally distorted, the TFS provides cues for speech segmentation and aids speech recognition significantly. Our findings show that it is insufficient to investigate solely the SE to understand temporal speech segmentation, as the SE and the TFS derived from a band-filtering method convey comparable, if not inseparable, temporal information. We argue for a more synthetic view of speech segmentation: the auditory system groups speech signals coherently in both temporal and spectral domains.
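The "frame-to-frame spectral change" measure mentioned in the abstract can be illustrated with spectral flux, a standard measure of how much the magnitude spectrum changes between consecutive analysis frames. The sketch below is not the authors' code: the naive DFT, frame length, hop size, and toy signal are illustrative assumptions, kept dependency-free for clarity.

```python
# Illustrative sketch (not the authors' analysis): spectral flux as a
# frame-to-frame spectral-change measure, using a naive DFT so the
# example needs only the Python standard library.
import math

def magnitude_spectrum(frame):
    """Magnitude spectrum of one frame via a naive DFT (first half of bins)."""
    n = len(frame)
    spec = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(math.hypot(re, im))
    return spec

def spectral_flux(signal, frame_len=64, hop=32):
    """Sum of positive spectral differences between consecutive frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    specs = [magnitude_spectrum(f) for f in frames]
    return [sum(max(c - p, 0.0) for p, c in zip(prev, cur))
            for prev, cur in zip(specs, specs[1:])]

# Toy input: a tone whose frequency changes halfway through, so spectral
# change is concentrated around the transition.
sig = [math.sin(2 * math.pi * 5 * t / 64) for t in range(256)]
sig += [math.sin(2 * math.pi * 12 * t / 64) for t in range(256)]
flux = spectral_flux(sig)
```

The resulting flux sequence is itself a slowly varying temporal signal, which is the sense in which a spectral-change measure can carry envelope-like segmentation cues.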

Details

Language(s): eng - English
Dates: 2019-08-10, 2018-12-29, 2019-08-31, 2019-09-01, 2019-11-15
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Method: Peer
Identifiers: DOI: 10.1016/j.neuroimage.2019.116152
Degree: -

Source 1

Title: NeuroImage
Source Genre: Journal
Publ. Info: Orlando, FL : Academic Press
Pages: -
Volume / Issue: 202
Sequence Number: 116152
Start / End Page: -
Identifier: ISSN: 1053-8119
CoNE: https://pure.mpg.de/cone/journals/resource/954922650166