
Status: Released

Genre: Talk

The intrinsic reward of sensory experiences

MPS-Authors

Brielmann, A., Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society
Berentelg, M., Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society
Dayan, P., Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Brielmann, A., Berentelg, M., & Dayan, P. (2022). The intrinsic reward of sensory experiences. Talk presented at Annual Meeting of the Society for NeuroEconomics (SNE 2022). Arlington, VA, USA. 2022-09-30 - 2022-10-02.


Cite as: https://hdl.handle.net/21.11116/0000-000B-0CF2-7
Abstract
Objective: Listening to music, watching a sunset, eating your favorite ice cream even when sated: all of these sensory experiences are rewarding in and of themselves. Why? And why does this reward differ so much between individuals and across time? We propose that particular sensory experiences are intrinsically rewarding because they serve the ethologically grounded task of fashioning a sensory system that effectively processes the objects it expects to encounter, both now and in the future. We discuss a recent theory and computational model in which the sensory system comprises a generative model of objects in the sensory environment, shaped through learning occasioned by the objects the observer encounters. Two interlinked components generate intrinsic sensory value: an immediate sensory reward from fluency, operationalized as the likelihood of the current object given the observer's state, and the reward of learning, operationalized as the change in expected future reward.

Methods: We report findings from a simple image rating task in which participants (N = 59) rate how much they like a set of dog images (n = 55) that we created from seven source images in a rigorously controlled manner using the NeuralCrossbreed morphing algorithm. Following our theoretical assumption that object recognition and sensory valuation are linked, we derive stimulus feature representations from deep neural networks pretrained on image recognition (e.g., VGG-16).

Results: A full realization of our model captures liking judgments on a trial-by-trial basis (median r = 0.65) and far outperforms predictions based on population averages (median r = 0.01; comparison of prediction errors for held-out trials, p < 0.001, BF = 7.8 × 10^8). In addition, we show image-sequence-dependent changes in liking ratings that justify the learning component of our model: the model explains on average 20% less variance for simulated random trial orders than for the true trial order (pairwise comparison W = 7.0, p < 0.001).

Conclusions: In sum, we show that a computational model can capture the dynamics of individual sensory value judgments. The components of our theory map directly onto those of conventional reinforcement-learning-based accounts of decision making, offering the opportunity to understand how primary, secondary, and sensory rewards jointly drive behavior.
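
The two-component value computation described in the Objective lends itself to a compact sketch. The following is a minimal illustration, assuming a diagonal-Gaussian generative model over stimulus features and a simple mean-shift learning rule; the class name, learning rule, and model form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class Observer:
    """Observer whose sensory system is a diagonal-Gaussian generative
    model over stimulus feature vectors (an illustrative assumption)."""

    def __init__(self, dim, learning_rate=0.05):
        self.mu = np.zeros(dim)   # current expectation over features
        self.var = np.ones(dim)  # per-feature variance (held fixed here)
        self.lr = learning_rate

    def log_likelihood(self, x):
        # Immediate "fluency" reward: how well the current object is
        # predicted given the observer's state.
        return -0.5 * np.sum((x - self.mu) ** 2 / self.var
                             + np.log(2 * np.pi * self.var))

    def expected_future_reward(self, anticipated):
        # Average reward over objects the observer expects to encounter.
        return np.mean([self.log_likelihood(x) for x in anticipated])

    def value(self, x, anticipated):
        r_fluency = self.log_likelihood(x)
        before = self.expected_future_reward(anticipated)
        self.mu += self.lr * (x - self.mu)   # learn from the current object
        after = self.expected_future_reward(anticipated)
        r_learning = after - before          # reward of learning
        return r_fluency + r_learning
```

Because the model updates after every stimulus, the value it assigns depends on presentation order, which is the property that the random-order comparison in the Results probes.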
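
Under the VGG-16 assumption mentioned in the Methods, the stimulus feature vectors fed to such a model could be extracted roughly as follows; the choice of layer and pooling here is a guess, as the abstract does not specify either.

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained VGG-16 from torchvision; evaluation mode, no gradients needed.
weights = models.VGG16_Weights.IMAGENET1K_V1
vgg = models.vgg16(weights=weights).eval()
preprocess = weights.transforms()  # matching resize/crop/normalization

def image_features(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = vgg.features(img)             # convolutional feature maps
        vec = vgg.avgpool(fmap).flatten(1)   # pooled to a fixed-size vector
    return vec.squeeze(0).numpy()
```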