
Released

Poster

Reverse Correlation in Temporal FACS Space Reveals Diagnostic Information During Dynamic Emotional Expression Classification

MPG Authors

Breidt, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Curio, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary material (freely accessible)
There is no freely accessible supplementary material available
Citation

Garrod, O., Yu, H., Breidt, M., Curio, C., & Schyns, P. (2010). Reverse correlation in temporal FACS space reveals diagnostic information during dynamic emotional expression classification. Poster presented at the 10th Annual Meeting of the Vision Sciences Society (VSS 2010), Naples, FL, USA.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C096-8
Abstract
Reverse correlation experiments have previously revealed the locations of facial features crucial for the recognition of different emotional expressions, and have related these features to brain electrophysiological activity [SchynsEtal07]. However, in social perception we expect the generation and encoding of communicative signals to share a common framework in the brain [SeyfarthCheney03], and neither ‘Bubbles’ [GosselinSchyns03] nor white-noise-based manipulation effectively targets the input features underlying facial expression generation: the combined activation of sets of facial muscles over time. [CurioEtal06] propose a motion-retargeting method that controls the appearance of facial expression stimuli via a linear 3D Morphable Model [BlanzVetter99] composed of recorded Action Units (AUs). Each AU represents the surface deformation of the face given the full activation of a particular muscle or muscle group, taken from the FACS system [EkmanFriesen79]. The set of weighted linear combinations of AUs is hypothesised as a generative model of the typical facial movements of this actor.
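To make the linear AU model concrete, the following minimal NumPy sketch synthesizes a face mesh as the neutral face plus a weighted sum of recorded AU deformations. The names (synthesize_face, au_deformations) and array shapes are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def synthesize_face(neutral, au_deformations, weights):
        """Linear AU model: neutral face plus a weighted sum of AU deformations.

        neutral:         (V, 3) vertex positions of the actor's neutral face.
        au_deformations: (K, V, 3) surface deformation of each recorded AU at
                         full muscle activation, relative to the neutral face.
        weights:         (K,) AU activation weights, typically in [0, 1].
        """
        w = np.asarray(weights).reshape(-1, 1, 1)           # (K, 1, 1) for broadcasting
        return neutral + (w * au_deformations).sum(axis=0)  # (V, 3) deformed mesh

    # Animating an expression then reduces to evaluating time-varying weights:
    # frames = [synthesize_face(neutral, au_deformations, w_t) for w_t in weight_timecourses]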
Here we report the outcome of a facial emotion reverse correlation experiment with one such generative AU model, over a space of temporally parameterized AU weights. On each trial, between 1 and 5 AUs are randomly selected, and a random timecourse is generated for each selected AU according to 6 temporal parameters (see supplementary figure). The observer rates the stimulus for each of the 6 ‘universal emotions’ on a continuous confidence scale from 0 to 1, and from these ratings optimal AU timecourses (timecourses whose temporal parameters maximize the expected rating for a given expression) are derived per expression and AU. These are then fed as weights into the AU model to reveal the feature dynamics associated with each expression. This method extends Bubbles and reverse correlation techniques to a relevant input space: one that makes explicit hypotheses about the temporal structure of diagnostic information.
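The abstract specifies the trial structure (1 to 5 randomly selected AUs, 6 temporal parameters per timecourse, 6 emotion ratings) but not its estimator; the sketch below pairs that trial structure with a standard rating-weighted-average reverse-correlation estimate as one plausible stand-in. The AU count N_AUS and all function names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    N_AUS, N_PARAMS, N_EMOTIONS = 25, 6, 6   # AU count is a hypothetical; the 6 parameters
                                             # and 6 universal emotions follow the abstract

    def sample_trial():
        """One stimulus: 1-5 randomly chosen AUs, each with a random 6-parameter timecourse."""
        aus = rng.choice(N_AUS, size=rng.integers(1, 6), replace=False)
        params = rng.uniform(0.0, 1.0, size=(len(aus), N_PARAMS))  # normalized units
        return aus, params

    def optimal_params(trials, ratings, emotion):
        """Rating-weighted mean of temporal parameters per AU for one emotion.

        trials:  list of (aus, params) pairs from sample_trial().
        ratings: (n_trials, N_EMOTIONS) confidence ratings in [0, 1].
        Returns an (N_AUS, N_PARAMS) array estimating the parameters that
        maximize the expected rating for this emotion.
        """
        num = np.zeros((N_AUS, N_PARAMS))
        den = np.zeros((N_AUS, 1))
        for (aus, params), r in zip(trials, ratings):
            num[aus] += r[emotion] * params
            den[aus] += r[emotion]
        return num / np.maximum(den, 1e-12)   # avoid divide-by-zero for unseen AUs

The per-expression, per-AU parameters estimated this way can then be converted back into timecourses and fed as weights into the AU model, as described above.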