Free keywords:
Phoneme discrimination; Speaker variability; Gamification; Online experiment
Abstract:
Listeners can effortlessly understand speech from any speaker, which is remarkable given the enormous acoustic variability and lack of invariant features corresponding to phonemes across speakers. Recently, it has been proposed that listeners use voice information to adapt to speakers (Kleinschmidt & Jaeger, 2015), which would reduce acoustic variability and explain why listeners can understand speech robustly from different speakers.
In the current study, we investigate whether adult listeners rely on voice information to adapt to speakers by testing the effect of speaker variability on phoneme processing. Specifically, we will examine whether there is a processing cost for listening to multiple speakers as compared to a single speaker. Given that listeners effortlessly understand different speakers, we will use a crowd-science approach to increase our sample size and capture potentially subtle effects of speaker variability on phoneme processing.
To test phoneme discrimination, we will select two phoneme contrasts that are equally distant in acoustic space within speakers but unequally distant across speakers, capitalizing on an analysis that quantifies how much influence speaker variability has on the acoustic distance between phoneme contrasts in English (Bergmann et al., 2016). We will test the discrimination of these two phoneme contrasts in a single-speaker and a multiple-speaker condition, in a web-based XAB discrimination experiment, which we will gamify in order to maintain participants' attention. We will present the experimental design, along with the conceptualization of the game.
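The distinction between within-speaker and across-speaker acoustic distance can be sketched as follows. This is a minimal illustration, not the analysis of Bergmann et al. (2016): the formant values are purely hypothetical, and the distance measure (Euclidean distance in F1/F2 space) is a simplifying assumption.

```python
import numpy as np

# Hypothetical F1/F2 formant means (Hz) for two vowel categories,
# one row per speaker. Values are illustrative only.
vowel_a = np.array([[730.0, 1090.0],   # speaker 1
                    [850.0, 1220.0],   # speaker 2
                    [780.0, 1150.0]])  # speaker 3
vowel_b = np.array([[570.0,  840.0],
                    [690.0,  980.0],
                    [620.0,  900.0]])

# Within-speaker distance: distance between the two vowels computed
# separately for each speaker, then averaged.
within = np.linalg.norm(vowel_a - vowel_b, axis=1).mean()

# Across-speaker distance: distance between one speaker's vowel_a and
# a *different* speaker's vowel_b, averaged over all mixed pairings.
diffs = vowel_a[:, None, :] - vowel_b[None, :, :]
all_pairs = np.linalg.norm(diffs, axis=2)
mixed = ~np.eye(len(vowel_a), dtype=bool)   # exclude same-speaker pairs
across = all_pairs[mixed].mean()

print(f"within-speaker distance:  {within:.1f} Hz")
print(f"across-speaker distance:  {across:.1f} Hz")
```

A contrast for which `across` diverges strongly from `within` would be one where speaker variability matters most; the prediction below concerns exactly such contrasts.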
If listeners adapt to speakers based on voice information, there should be a processing cost in the multiple-speaker condition relative to the single-speaker condition, and this cost should be greater for the phoneme contrast that is more widely separated in acoustic space across speakers.