
Journal Article

Emotional state dependence facilitates automatic imitation of visual speech

MPS-Authors

Kotz,  Sonja A.
Department of Neuropsychology and Psychopharmacology, Maastricht University, the Netherlands;
Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Citation

Virhia, J., Kotz, S. A., & Adank, P. (2019). Emotional state dependence facilitates automatic imitation of visual speech. Quarterly Journal of Experimental Psychology, 72(12), 2833-2847. doi:10.1177/1747021819867856.


Cite as: https://hdl.handle.net/21.11116/0000-0004-E5C9-7
Abstract
Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the stimulus–response compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven aspects of the task, that is, the distracter's emotional valence. It is unclear, however, how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the emotional valence of the distracter (Stimulus-driven Dependence) and of the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli. They produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.
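The automatic-imitation effect measured by the SRC paradigm is typically quantified as the RT difference between incongruent and congruent distracter trials. The following is a minimal illustrative sketch of that computation; the function name and all RT values are hypothetical and are not drawn from the study's data or analysis code.

```python
def compatibility_effect(congruent_rts, incongruent_rts):
    """Mean RT on incongruent trials minus mean RT on congruent trials.

    A positive value indicates that congruent distracters facilitated
    responding (i.e., an automatic-imitation effect).
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical RTs in milliseconds for one participant.
congruent = [412, 398, 405, 420, 401]    # e.g., prompt "aa" over an "aa" video
incongruent = [455, 447, 462, 450, 458]  # e.g., prompt "aa" over an "oo" video

effect = compatibility_effect(congruent, incongruent)
print(round(effect, 1))  # → 47.2
```

In a design like the one described in the abstract, this difference score would be computed per condition (e.g., neutral vs. emotional prompt, neutral vs. emotional distracter) and compared across conditions.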