
Journal Article

Discovering functional units in continuous speech

MPS-Authors

Lim, Sung-Joo
Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA;
Max Planck Research Group Auditory Cognition, MPI for Human Cognitive and Brain Sciences, Max Planck Society;
Center for the Neural Basis of Cognition, University of Pittsburgh, PA, USA;

Citation

Lim, S.-J., Lacerda, F., & Holt, L. L. (2015). Discovering functional units in continuous speech. Journal of Experimental Psychology: Human Perception and Performance, 41(4), 1139-1152. doi:10.1037/xhp0000067.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0028-FD0D-3
Abstract
Language learning requires that listeners discover acoustically variable functional units, like phonetic categories and words, from an unfamiliar, continuous acoustic stream. Although many category learning studies have examined how listeners learn to generalize across the acoustic variability inherent in the signals that convey the functional units of language, these studies have tended to focus on category learning across isolated sound exemplars. However, continuous input presents many additional learning challenges that may impact category learning. Listeners may not know the timescale of the functional unit, its relative position in the continuous input, or its relationship to other evolving input regularities. Moving laboratory-based studies of isolated category exemplars toward more natural input is important to modeling language learning, but very little is known about how listeners discover categories embedded in continuous sound. In 3 experiments, adult participants heard acoustically variable sound category instances embedded in acoustically variable and unfamiliar sound streams within a video game task. This task was inherently rich in multisensory regularities correlated with the to-be-learned categories and likely to engage procedural learning without requiring explicit categorization, segmentation, or even attention to the sounds. After 100 min of game play, participants categorized familiar sound streams in which target words were embedded and generalized this learning to novel streams as well as to isolated instances of the target words. The findings demonstrate that even without a priori knowledge, listeners can discover input regularities that have the best predictive control over the environment for both non-native speech and nonspeech signals, emphasizing the generality of the learning.