
Talk

Tutorial 2: Meta-Learned Models of Cognition

MPS-Authors

Binz, M.
Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Binz, M. (Ed.). (2022). Tutorial 2: Meta-Learned Models of Cognition. Talk presented at 15th Biannual Conference of the German Society for Cognitive Science (KogWis 2022). Freiburg (Breisgau), Germany. 2022-09-05 - 2022-09-07.


Cite as: https://hdl.handle.net/21.11116/0000-000A-EAE9-8
Abstract
Research in cognitive psychology and neuroscience relies on computational models to study, analyze, and understand human learning. Traditionally, such computational models have been hand-designed by expert researchers. In a cognitive architecture, for example, researchers provide a fixed set of structures and a definition of how these structures interact with each other. In a Bayesian model, researchers instead specify a prior and a likelihood function, which, in combination with Bayes' rule, fully determine the model's behavior. The framework of meta-learning offers a radically different approach for constructing computational models of learning. In this framework, learning algorithms are acquired – i.e., they are themselves learned – through repeated interactions with an environment instead of being defined a priori by a researcher. Recently, psychologists have started to apply meta-learning to study human learning. In this context, it has been demonstrated that meta-learned models can capture a wide range of empirically observed phenomena that could not be explained otherwise. Among other things, they reproduce human biases in probabilistic reasoning [1], discover heuristic decision-making strategies used by people [2], and generalize compositionally on complex language tasks in a human-like manner [3]. The goal of this tutorial is to introduce the general ideas behind meta-learning, highlight its close connections to Bayesian inference, and discuss its advantages and disadvantages relative to other modeling frameworks. We will furthermore cover how to implement these models using the PyTorch framework.
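To make the core idea concrete, the following is a minimal sketch (not taken from the tutorial itself) of one common way such models are implemented in PyTorch: a recurrent network is meta-trained across many randomly sampled tasks, receiving the previous observation's outcome as input, so that any within-task "learning" is carried out by the network's recurrent dynamics rather than by weight updates. The task distribution (linear regression with a random slope) and all hyperparameters are illustrative assumptions.

```python
# Hypothetical meta-learning sketch: an LSTM meta-trained across many
# randomly sampled linear-regression tasks. At step t it sees (x_t, y_{t-1})
# and predicts y_t; adapting to a new task happens in its hidden state.
import torch
import torch.nn as nn

torch.manual_seed(0)

class MetaLearner(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # input_size=2: the current input x_t and the previous target y_{t-1}
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, xy):
        h, _ = self.rnn(xy)   # (batch, seq, hidden)
        return self.out(h)    # (batch, seq, 1): prediction for y_t

def sample_tasks(batch=16, seq=10):
    """Sample a batch of tasks; each task is y = w * x with its own slope w."""
    w = torch.randn(batch, 1, 1)                          # task-specific slope
    x = torch.randn(batch, seq, 1)
    y = w * x
    # Shift targets by one step so the model sees y_{t-1} alongside x_t.
    y_prev = torch.cat([torch.zeros(batch, 1, 1), y[:, :-1]], dim=1)
    return torch.cat([x, y_prev], dim=-1), y

model = MetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Meta-training: the outer loop optimizes weights across tasks; the "inner
# loop" (adapting to each task) is implicit in the recurrent computation.
for _ in range(200):
    inputs, targets = sample_tasks()
    loss = ((model(inputs) - targets) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After meta-training, the network can be evaluated on tasks with slopes it has never seen: its prediction error typically drops over the course of a sequence, which is exactly the sense in which the learning algorithm itself has been learned.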