Abstract:
Much work on spatial behaviour equates allocentric representations with strategies based on cognitive maps, and egocentric representations with taxon-like habits (Geerts et al., 2020). This has led to a focus on the hippocampus and the medial entorhinal cortex (MEC), which exhibit allocentric coding for aspects of space in rodents and beyond. However, egocentric representations are particularly valuable for policies defined relative to the self, and such deictic notions have been exploited in reinforcement learning (RL; Agre & Chapman, 1987; Finney et al., 2012). The lateral entorhinal cortex (LEC), which is involved in associative learning (Suter et al., 2018; Wilson et al., 2013; Tsao et al., 2018) and spatial processing (Hales et al., 2014), encodes the bearing of external items and boundaries in egocentric coordinates (Wang et al., 2018). This suggests that it might encode a similar sort of cognitive map to the MEC's, but in an egocentric reference frame. Here, we build a reinforcement learning agent that combines a putatively LEC-based egocentric successor representation (SR; Dayan, 1993) with a conventional allocentric SR to navigate complex 2D environments. We demonstrate that the agent learns generalisable egocentric and allocentric value functions which can be composed additively to learn policies efficiently and to adapt to new environments quickly. Our work shows the benefit to the hippocampal formation of capturing egocentric as well as allocentric relational structure.
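As a minimal sketch of the core mechanism the abstract describes, the snippet below computes a successor representation (SR; Dayan, 1993) in closed form for a fixed policy on a toy 4-state ring, and shows that SR-based value functions compose additively. The environment, policy, and reward vectors are hypothetical illustrations, not the paper's actual 2D environments or its LEC/MEC model; the two reward vectors merely stand in for rewards tied to allocentric and egocentric features.

```python
import numpy as np

n, gamma = 4, 0.9  # toy state count and discount (illustrative choices)

# Random-walk policy on a ring: step left or right with probability 0.5.
P = np.zeros((n, n))
for s in range(n):
    P[s, (s - 1) % n] += 0.5
    P[s, (s + 1) % n] += 0.5

# Closed-form SR for this policy: M = (I - gamma * P)^(-1).
# M[s, s'] is the expected discounted number of visits to s' starting from s.
M = np.linalg.inv(np.eye(n) - gamma * P)

# Two hypothetical reward vectors, standing in for rewards defined in
# allocentric versus egocentric terms.
r_allo = np.array([1.0, 0.0, 0.0, 0.0])
r_ego = np.array([0.0, 0.0, 0.5, 0.0])

# The SR factorises value as V = M r, so value functions for different
# reward components can be composed by simple addition.
V_allo = M @ r_allo
V_ego = M @ r_ego
V_combined = M @ (r_allo + r_ego)

assert np.allclose(V_allo + V_ego, V_combined)
```

The additivity shown at the end is the property the abstract exploits: because value is linear in reward under a fixed SR, separately learned egocentric and allocentric value functions can be summed to guide behaviour.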