Abstract:
An influential reinforcement learning framework proposes that behaviour is jointly governed by model-free (MF) and model-based (MB) controllers. Whereas the former learns the values of actions directly from past encounters, the latter uses a cognitive map of the task to calculate them prospectively. Considerable attention has been paid to how these systems interact during choice; however, how they interact in more general facets of decision-making, such as learning and confidence, is unclear. Using variants of a simple task that limpidly lays out various aspects of the interaction, we show how MB information resolves uncertainty and focuses MF learning, and we discuss unexpected aspects of their joint contribution to confidence.
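To make the MF/MB distinction in the abstract concrete, the following is a minimal illustrative sketch, not the paper's task or model: a toy MDP in which a model-free controller updates Q-values directly from experienced transitions via temporal-difference learning, while a model-based controller estimates a transition and reward model (a "cognitive map") and computes values prospectively by value iteration. The environment, parameter values, and function names are all hypothetical assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment (NOT the task used in the paper):
# 3 states, 2 actions; action 0 from state 0 tends to lead to the rewarding state 1.
n_states, n_actions = 3, 2
P = np.array([
    [[0.0, 0.7, 0.3], [0.0, 0.3, 0.7]],   # transitions from state 0
    [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],   # state 1 returns to the start
    [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],   # state 2 returns to the start
])
R = np.array([
    [0.0, 0.0],   # no reward at the first stage
    [1.0, 1.0],   # state 1 is rewarding
    [0.0, 0.0],   # state 2 is not
])

alpha, gamma = 0.1, 0.9

# --- Model-free controller: caches action values learned from past encounters ---
Q_mf = np.zeros((n_states, n_actions))

def mf_update(s, a, r, s_next):
    """Q-learning (temporal-difference) update from a single experienced transition."""
    td_error = r + gamma * Q_mf[s_next].max() - Q_mf[s, a]
    Q_mf[s, a] += alpha * td_error

# --- Model-based controller: learns a map (P_hat, R_hat), then plans prospectively ---
counts = np.ones((n_states, n_actions, n_states))   # transition counts (uniform prior)
R_hat = np.zeros((n_states, n_actions))

def mb_update(s, a, r, s_next):
    """Update the learned transition counts and reward estimates."""
    counts[s, a, s_next] += 1
    R_hat[s, a] += alpha * (r - R_hat[s, a])

def mb_plan(n_iters=50):
    """Value iteration over the learned model yields prospective Q-values."""
    P_hat = counts / counts.sum(axis=2, keepdims=True)
    Q_mb = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        V = Q_mb.max(axis=1)            # greedy state values under current estimate
        Q_mb = R_hat + gamma * P_hat @ V
    return Q_mb

# Give both controllers the same random experience in the toy environment.
s = 0
for _ in range(2000):
    a = rng.integers(n_actions)
    s_next = rng.choice(n_states, p=P[s, a])
    r = R[s, a]
    mf_update(s, a, r, s_next)
    mb_update(s, a, r, s_next)
    s = s_next

print("MF Q-values (cached from experience):\n", Q_mf)
print("MB Q-values (planned from the learned map):\n", mb_plan())
```

Under these assumptions, both controllers converge to similar values, but they arrive there differently: the MF values are retrospective caches of sampled outcomes, whereas the MB values are recomputed prospectively from the learned map, which is the contrast the abstract builds on.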