
Released

Poster

Decoding emotion perception from single-trial distributed brain activation

MPG Authors

Thielscher, A.
Former Department MRZ, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary material is available.
Citation

Padmala, S., Thielscher, A., & Pessoa, L. (2006). Decoding emotion perception from single-trial distributed brain activation. Poster presented at 36th Annual Meeting of the Society for Neuroscience (Neuroscience 2006), Atlanta, GA, USA.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CFE9-5
Abstract
Functional MRI data are typically analyzed in a subtractive-univariate fashion. In the present study, we utilized machine learning algorithms to “decode” brain states during the viewing of faces with emotional expressions. Subjects (n = 19) performed a two-choice task in which they decided whether a briefly presented face (80 ms) displayed a fearful or a disgusted expression while fMRI data were acquired (1.5 T). Trials occurred every 15 s in a slow event-related design. Graded levels of the emotional expressions were obtained by computer-morphing neutral and fearful expressions and, separately, neutral and disgusted expressions. The final graded-stimulus series contained 100%, 75%, and 37% fearful faces, a neutral face, and 37%, 75%, and 100% disgusted faces. When predicting the stimulus viewed by the participant, we considered voxels from five ROIs that exhibited strong task activation: middle occipital gyrus, fusiform gyrus, IPS, anterior insula, and inferior frontal sulcus. For prediction, we employed standard linear Support Vector Machines (SVMs). When the SVM was trained on the 100% stimuli, prediction accuracy (i.e., correctly classifying the stimulus as fearful or disgusted) averaged 76.4% (assessed via k-fold cross-validation) and exceeded 85% for 8 subjects. Next, we tested how well a machine trained on the 100% stimuli would perform with graded stimuli (in such cases, the SVM was never trained with graded stimuli). For the 75% graded stimuli, classification accuracy was 65.8%; for the 37% graded stimuli, it was 59.3%. In all cases, prediction accuracy was best when a small subset of voxels (7 on average) was used. Our results show that the distributed pattern of single-trial activation can be used to predict the stimulus viewed by the participant. In addition, training on the 100% stimuli could be used to predict the perception of graded stimuli, demonstrating that the SVM learned features that generalized across perceptual conditions.
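
The decoding pipeline described in the abstract (single-trial ROI voxel patterns fed to a linear SVM, scored with k-fold cross-validation, plus a generalization test from the 100% stimuli to graded morphs) can be sketched as follows. This is a minimal illustration in Python with scikit-learn, which is an assumption since the abstract does not name the analysis software; synthetic data stand in for the real fMRI activation patterns, and all array sizes and effect strengths are hypothetical.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: 80 single trials by 7 voxels (the abstract
# reports that a small subset of ~7 voxels gave the best accuracy).
n_trials, n_voxels = 80, 7
X = rng.normal(size=(n_trials, n_voxels))   # synthetic single-trial patterns
y = rng.integers(0, 2, size=n_trials)       # 0 = fearful, 1 = disgusted

# Separate the class means so the synthetic problem is learnable.
X[y == 1] += 0.8

# Standard linear SVM, as named in the abstract, evaluated with
# stratified k-fold cross-validation on the unambiguous (100%) stimuli.
clf = SVC(kernel="linear")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Mean k-fold accuracy (100% stimuli): {scores.mean():.3f}")

# Generalization test mirroring the abstract: train only on the 100%
# stimuli, then classify synthetic "graded" trials whose class
# separation is deliberately weaker (the morphed expressions).
X_graded = rng.normal(size=(40, n_voxels))
y_graded = rng.integers(0, 2, size=40)
X_graded[y_graded == 1] += 0.4              # weaker signal for morphs
clf.fit(X, y)
print(f"Accuracy on graded stimuli: {clf.score(X_graded, y_graded):.3f}")

In the study itself, a voxel-selection step (yielding the roughly 7-voxel subsets) preceded classification; that step is omitted here for brevity.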