  Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks

Jagadish, A., Coda-Forno, J., Thalmann, M., Schulz, E., & Binz, M. (2024). Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks. In 41st International Conference on Machine Learning (ICML 2024).

Basic

Genre: Conference Paper


Locators

Locator: https://arxiv.org/pdf/2402.01821 (Any fulltext)
OA-Status: Not specified

Creators

Creators:
Jagadish, A. K.¹, Author
Coda-Forno, J.¹, Author
Thalmann, M.¹, Author
Schulz, E.¹, Author
Binz, M.¹, Author
Affiliations:
¹ Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_3189356)

Content

Abstract: Ecological rationality refers to the notion that humans are rational agents adapted to their environment. However, testing this theory remains challenging for two reasons: the difficulty of defining which tasks are ecologically valid, and of building rational models for those tasks. In this work, we demonstrate that large language models can generate cognitive tasks, specifically category learning tasks, that match the statistics of real-world tasks, thereby addressing the first challenge. We tackle the second challenge by deriving rational agents adapted to these tasks using the framework of meta-learning, leading to a class of models called ecologically rational meta-learned inference (ERMI). ERMI quantitatively explains human data better than seven other cognitive models in two different experiments. It additionally matches human behavior on a qualitative level: (1) it finds the same tasks difficult that humans find difficult, (2) it becomes more reliant on an exemplar-based strategy for assigning categories as learning progresses, and (3) it generalizes to unseen stimuli in a human-like way. Furthermore, we show that ERMI's ecologically valid priors allow it to achieve state-of-the-art performance on the OpenML-CC18 classification benchmark.

Details

Dates: 2024-07
Publication Status: Published online

Event

Title: 41st International Conference on Machine Learning (ICML 2024)
Place of Event: Vienna, Austria
Start / End Date: 2024-07-21 to 2024-07-27

Source 1

Title: 41st International Conference on Machine Learning (ICML 2024)
Source Genre: Proceedings