
Released

Journal Article

Do I look like I'm sure? Partial metacognitive access to the low-level aspects of one's own facial expressions

MPS-Authors

Forster, Carina
Department of Psychology, Humboldt University Berlin, Germany;
Bernstein Center for Computational Neuroscience, Berlin, Germany;
Berlin School of Mind and Brain, Humboldt University Berlin, Germany;
Department of Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Citation

Ciston, A. B., Forster, C., Brick, T. R., Kühn, S., Verrel, J., & Filevich, E. (2022). Do I look like I'm sure? Partial metacognitive access to the low-level aspects of one's own facial expressions. Cognition, 225: 105155. doi:10.1016/j.cognition.2022.105155.


Cite as: https://hdl.handle.net/21.11116/0000-000A-70AA-8
Abstract
As humans, we communicate important information through fine nuances in our facial expressions, but because conscious motor representations are noisy, we might not be able to report these fine movements. Here we measured the precision of the explicit metacognitive information that young adults have about their own facial expressions. Participants imitated pictures of themselves making facial expressions and triggered a camera to take a picture of them while doing so. They then rated how well they thought they had imitated each expression. We defined metacognitive access to facial expressions as the relationship between objective performance (how well the two pictures matched) and subjective performance ratings. As a group, participants' metacognitive confidence ratings were only about four times less precise than their own similarity ratings. In turn, machine learning analyses revealed that participants' performance ratings were based on idiosyncratic subsets of features. We conclude that metacognitive access to one's own facial expressions is only partial.