
Released

Preprint

Symmetry and generalization in local learning of predictive representations

MPS-Authors

Keck, Janis
Department Psychology (Doeller), MPI for Human Cognitive and Brain Sciences, Max Planck Society;


Doeller, Christian F.
Department Psychology (Doeller), MPI for Human Cognitive and Brain Sciences, Max Planck Society;

Fulltext (public)

Keck_pre_v2.pdf
(Preprint), 8MB

Citation

Keck, J., Barry, C., Doeller, C. F., & Jost, J. (2024). Symmetry and generalization in local learning of predictive representations. bioRxiv. doi:10.1101/2024.05.27.595705.


Cite as: https://hdl.handle.net/21.11116/0000-000F-61F9-B
Abstract
In spatial cognition, the Successor Representation (SR) from reinforcement learning provides a compelling candidate for how predictive representations are used to encode space. In particular, hippocampal place cells are hypothesized to encode the SR. Here, we investigate how varying the temporal symmetry in learning rules influences those representations. To this end, we use a simple local learning rule which can be made insensitive to the temporal order. We find analytically that a symmetric learning rule results in a successor representation under a symmetrized version of the experienced transition structure. We then apply this rule to a two-layer neural network model loosely resembling the hippocampal subfields CA3 (with a symmetric learning rule and recurrent weights) and CA1 (with an asymmetric learning rule and no recurrent weights). When the model is repeatedly exposed to a linear track, neurons in its CA3 layer show less shift of the centre of mass than those in CA1, in line with existing empirical findings. Investigating the functional benefits of such symmetry, we find that a symmetric learning rule yields representations which afford better generalization when the model is probed to navigate to a new target without relearning the SR. This effect is reversed when the state space is no longer symmetric. Thus, our results hint at a potential benefit of the inductive bias afforded by symmetric learning rules in brain areas involved in spatial navigation, where the state space is naturally symmetric.
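The relationship the abstract describes between a transition structure and its SR can be sketched numerically. The following is a minimal illustration, not the paper's model: the track length, discount factor, and the simple row-renormalized symmetrization below are our own assumptions. It computes the closed-form SR, M = (I - gamma*T)^(-1), for a rightward-biased linear track and for a symmetrized version of the same transitions.

```python
import numpy as np

# Illustrative sketch (not the paper's network model): SR on a
# 5-state linear track. All parameter choices here are assumptions.
n, gamma = 5, 0.9

# Asymmetric transitions T: always step right; last state is absorbing.
T = np.zeros((n, n))
for s in range(n - 1):
    T[s, s + 1] = 1.0
T[n - 1, n - 1] = 1.0

# Symmetrized transition structure: average T with its transpose,
# then renormalize rows so each is again a probability distribution.
T_sym = (T + T.T) / 2.0
T_sym /= T_sym.sum(axis=1, keepdims=True)

# Closed-form SR: M = (I - gamma*T)^(-1); row s holds the expected
# discounted future occupancy of every state when starting from s.
M_asym = np.linalg.inv(np.eye(n) - gamma * T)
M_sym = np.linalg.inv(np.eye(n) - gamma * T_sym)

# Under asymmetric transitions, states behind the agent are never
# revisited; the symmetric SR spreads occupancy in both directions.
print(np.round(M_asym[2], 2))
print(np.round(M_sym[2], 2))
```

From the middle state, the asymmetric SR assigns zero occupancy to states to the left, while the symmetrized SR assigns positive occupancy on both sides — the bidirectional spread that, per the abstract, underlies the reduced centre-of-mass shift and better generalization in symmetric state spaces.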