Decoding emotion perception from single-trial distributed brain activation


Thielscher, A.
Former Department MRZ, Max Planck Institute for Biological Cybernetics, Max Planck Society
Max Planck Institute for Biological Cybernetics, Max Planck Society


Padmala, S., Thielscher, A., & Pessoa, L. (2006). Decoding emotion perception from single-trial distributed brain activation. Poster presented at 36th Annual Meeting of the Society for Neuroscience (Neuroscience 2006), Atlanta, GA, USA.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CFE9-5
Functional MRI data are typically analyzed in a subtractive, univariate fashion. In the present study, we utilized machine learning algorithms to "decode" brain states during the viewing of faces with emotional expressions. Subjects (n = 19) performed a two-choice task in which they decided whether a briefly presented face (80 ms) displayed a fearful or a disgusted expression while fMRI data were acquired (1.5 T). Trials occurred every 15 s in a slow event-related design. Graded levels of emotional expression were obtained by computer-morphing neutral and fearful expressions and, separately, neutral and disgusted expressions. The final graded-stimulus series contained 100%, 75%, and 37% fearful faces, a neutral face, and 37%, 75%, and 100% disgusted faces. When predicting the stimulus viewed by the participant, we considered voxels from five ROIs that exhibited strong task activation: middle occipital gyrus, fusiform gyrus, IPS, anterior insula, and inferior frontal sulcus. For prediction, we employed standard linear Support Vector Machines (SVMs). When the SVM was trained on the 100% stimuli, prediction accuracy (i.e., correctly classifying the stimulus as fearful or disgusted) averaged 76.4% correct (assessed via k-fold cross-validation) and exceeded 85% for 8 subjects. Next, we tested how well a machine trained on the 100% stimuli would perform with graded stimuli (in such cases, the SVM was never trained with graded stimuli). For the 75% graded stimuli, classification accuracy was 65.8%; for the 37% graded stimuli, it was 59.3%. In all cases, prediction accuracy was best when a small subset of voxels (7 on average) was used. Our results show that the distributed pattern of single-trial activation can be employed to predict the stimulus viewed by the participant. In addition, training on the 100% stimuli could be used to predict the perception of graded stimuli, demonstrating that the SVM learned features that generalized across perceptual conditions.
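The decoding pipeline described in the abstract (single-trial voxel patterns → linear classifier → k-fold cross-validation, then generalization to weaker, "graded" stimuli never seen in training) can be sketched as follows. This is a minimal illustration on synthetic data: the voxel count, effect sizes, and trial numbers are arbitrary assumptions, and a least-squares linear classifier stands in for the study's SVM (both learn a linear decision boundary over voxel activations).

```python
import numpy as np

# Synthetic single-trial "voxel" patterns (NOT data from the study):
# two classes (0 = fearful, 1 = disgusted) with a class-dependent mean
# shift per voxel, plus unit-variance trial noise.
rng = np.random.default_rng(0)
n_trials, n_voxels, k = 100, 7, 5  # small voxel subset; 5-fold CV
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_voxels)) + np.where(y[:, None] == 1, 0.8, -0.8)

def fit_linear(X, y):
    """Least-squares linear classifier on +/-1 targets (bias via ones column)."""
    A = np.column_stack([X, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A, 2 * y - 1, rcond=None)
    return w

def predict(X, w):
    return (np.column_stack([X, np.ones(len(X))]) @ w) > 0

def kfold_accuracy(X, y, k):
    """Mean held-out accuracy over k folds."""
    idx = np.arange(len(y))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = fit_linear(X[train], y[train])
        accs.append(np.mean(predict(X[fold], w) == (y[fold] == 1)))
    return float(np.mean(accs))

acc = kfold_accuracy(X, y, k)

# Generalization test analogous to the study's: train on the strong-signal
# ("100%") trials, evaluate on weaker-signal ("graded") trials that were
# never used for training.
y_g = rng.integers(0, 2, 40)
X_g = rng.normal(size=(40, n_voxels)) + np.where(y_g[:, None] == 1, 0.4, -0.4)
w = fit_linear(X, y)
acc_graded = float(np.mean(predict(X_g, w) == (y_g == 1)))
```

As in the study, accuracy on the weaker-signal trials is expected to sit between chance (50%) and the accuracy on the strong-signal trials, since the classifier's linear weights capture the class difference but the test patterns carry less signal.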