Abstract:
Cognitive maps associate complex, high-dimensional stimuli, such as visual percepts, with abstract latent variables, such as spatial locations. It has been argued that animals use these abstract spatial features to navigate and infer novel shortcuts in their environments. Employing information theory, we present a simple computational account of how such abstract spatial representations may emerge from interactions with high-dimensional stimuli: we show that learning world models in which the dynamics are parsimonious and apply (approximately) independently of the state in an abstract state space gives rise to spatial concepts, improved performance in navigation tasks, and neural representations of space found in humans and rodents. Specifically, we show that artificial agents equipped with such world models i) outperform agents with non-parsimonious world models in planning tasks that require extrapolating the dynamics to novel parts of the environment, and ii) learn latent state spaces that afford faster policy learning. By constructing latent spaces in which a small set of dynamical laws holds independently of where the agent may be in this space, our model uses a simple computational principle to explain how geometric representations of high-dimensional worlds emerge. Additionally, these parsimonious world models allow for systematic generalization about transition dynamics in a way reminiscent of that of animals, and provide an information-theoretic account of the emergence of the neural representations associated with these abilities.
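The core principle — a small set of dynamical laws that hold independently of where the agent is in the latent space — can be illustrated with a toy sketch. This is not the paper's implementation; the latent dimensionality, action set, and additive transition rule below are illustrative assumptions. The point is that a state-independent transition law fit in one region of the space extrapolates, by construction, to regions the agent has never visited:

```python
# Toy sketch (illustrative assumptions, not the authors' model): a
# "parsimonious" world model whose latent dynamics are state-independent.
# Each action shifts the 2-D latent state by a fixed offset, regardless
# of where the agent is, so the same law applies everywhere.
import numpy as np

# Fixed per-action offsets: the single "dynamical law" shared across space.
ACTIONS = {
    "up":    np.array([0, 1]),
    "down":  np.array([0, -1]),
    "left":  np.array([-1, 0]),
    "right": np.array([1, 0]),
}

def step(state, action):
    """State-independent transition: s' = s + delta(a)."""
    return state + ACTIONS[action]

def rollout(start, plan):
    """Apply a sequence of actions from a given latent start state."""
    s = np.array(start)
    for a in plan:
        s = step(s, a)
    return s

# A plan produces the same displacement from ANY start state -- the basis
# for extrapolating about the dynamics of novel parts of the environment.
plan = ["up", "up", "right"]
d1 = rollout([0, 0], plan) - np.array([0, 0])
d2 = rollout([7, 3], plan) - np.array([7, 3])
assert np.array_equal(d1, d2)  # identical displacement everywhere
```

A non-parsimonious model would instead learn a separate transition rule per state (or per region), which cannot generalize to unvisited parts of the environment in this way.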