
Item Details


Released

Preprint

Symmetry and generalization in local learning of predictive representations

MPS-Authors

Keck, Janis
Department Psychology (Doeller), MPI for Human Cognitive and Brain Sciences, Max Planck Society;


Doeller, Christian F.
Department Psychology (Doeller), MPI for Human Cognitive and Brain Sciences, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

Keck_pre.pdf
(Preprint), 8MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Keck, J., Barry, C., Doeller, C. F., & Jost, J. (2024). Symmetry and generalization in local learning of predictive representations. bioRxiv. doi:10.1101/2024.05.27.595705.


Cite as: https://hdl.handle.net/21.11116/0000-000F-61F9-B
Abstract
It is an increasingly accepted view that the representations the brain generates are not merely descriptive of the current state of the world; rather, they serve a predictive purpose. In spatial cognition, the Successor Representation (SR) from reinforcement learning provides a compelling candidate for how such predictive representations are used to encode space; in particular, hippocampal place cells are assumed to encode the SR. Here, we investigate how varying the temporal symmetry in learning rules influences those representations. To this end, we use a simple local learning rule which can be made insensitive to temporal order. We find analytically that a symmetric learning rule results in a successor representation under a symmetrized version of the experienced transition structure. We then apply this rule to a two-layer neural network model loosely resembling the hippocampal subfields CA3 (with a symmetric learning rule and recurrent weights) and CA1 (with an asymmetric learning rule and no recurrent weights). When exposed repeatedly to a linear track, CA3 neurons in our model show less shift of their centre of mass than those in CA1, in line with existing empirical findings; this effect is not observed with an asymmetric learning rule. We furthermore investigate the functional benefit of such representations in simple RL navigation tasks, and find that a symmetric learning rule yields representations which afford better generalization when a model is probed to navigate to a new target without relearning the SR. This effect is reversed when the state space is no longer symmetric. Our results thus hint at a potential benefit of the inductive bias afforded by symmetric learning rules in areas employed in spatial navigation, where there naturally is symmetry in the state space. In conclusion, we expand the SR theory of the hippocampus by including symmetry in SR learning, which might yield an advantageous inductive bias for learning in space.
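As a rough illustration of the abstract's central claim, and not the authors' implementation, the sketch below computes the closed-form Successor Representation M = (I − γT)⁻¹ on a small linear track, once for the experienced (direction-biased) transition matrix and once for a symmetrized version of it; the function name `successor_representation`, the track size, the bias values, and the particular symmetrization (row-normalised average of T and its transpose) are all illustrative assumptions.

```python
# Illustrative sketch only: SR under experienced vs. symmetrized transitions.
# The 5-state track, the 0.8/0.2 rightward bias and the symmetrization scheme
# are assumptions for demonstration, not the paper's model.
import numpy as np

def successor_representation(T, gamma=0.9):
    """SR of transition matrix T with discount gamma: M = (I - gamma*T)^{-1}."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Experienced transitions on a linear track, biased to the right (asymmetric).
n = 5
T = np.zeros((n, n))
for s in range(n):
    right, left = min(s + 1, n - 1), max(s - 1, 0)
    T[s, right] += 0.8
    T[s, left] += 0.2

# A symmetrized transition structure, standing in for what a temporally
# symmetric learning rule would induce (assumption: average with transpose,
# then re-normalise rows so each state's outgoing probabilities sum to 1).
T_sym = (T + T.T) / 2
T_sym = T_sym / T_sym.sum(axis=1, keepdims=True)

M_asym = successor_representation(T)      # SR of the experienced dynamics
M_sym = successor_representation(T_sym)   # SR of the symmetrized dynamics

print(np.round(M_asym, 2))
print(np.round(M_sym, 2))
```

Printing the two matrices shows the qualitative point made in the abstract: the asymmetric SR concentrates predictive weight in the direction of travel, whereas the symmetrized SR spreads it to both neighbours of each state, the kind of bias that could help when navigating to a new target in a spatially symmetric environment.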