
  Large language models predict human sensory judgments across six modalities

Marjieh, R., Sucholutsky, I., van Rijn, P., Jacoby, N., & Griffiths, T. L. (2024). Large language models predict human sensory judgments across six modalities. Scientific Reports, 14: 21445. doi:10.1038/s41598-024-72071-1.


Files

24-cap-jac-07-large.pdf (Publisher version), 2MB
Name:
24-cap-jac-07-large.pdf
Description:
OA
OA-Status:
Gold
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
2024
Copyright Info:
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Creators

Marjieh, Raja (1), Author
Sucholutsky, Ilia (1), Author
van Rijn, Pol (2), Author
Jacoby, Nori (3, 4), Author
Griffiths, Thomas L. (1), Author
Affiliations:
(1) Department of Computer Science, Princeton University, Princeton, USA, ou_persistent22
(2) Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society, ou_2421697
(3) Research Group Computational Auditory Perception, Max Planck Institute for Empirical Aesthetics, Max Planck Society, ou_3024247
(4) Department of Psychology, Cornell University, Ithaca, USA, ou_persistent22

Content

Free keywords: Human behaviour, Psychology
Abstract: Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science. We show that state-of-the-art large language models can unlock new insights into this problem by providing a lower bound on the amount of perceptual information that can be extracted from language. Specifically, we elicit pairwise similarity judgments from GPT models across six psychophysical datasets. We show that the judgments are significantly correlated with human data across all domains, recovering well-known representations like the color wheel and pitch spiral. Surprisingly, we find that co-training a model (GPT-4) on vision and language does not necessarily lead to improvements specific to the visual modality, and that it provides predictions highly correlated with human data irrespective of whether it is given direct visual input or purely textual descriptors. To study the impact of specific languages, we also apply the models to a multilingual color-naming task. We find that GPT-4 replicates cross-linguistic variation in English and Russian, illuminating the interaction of language and perception.
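The abstract's core analysis, correlating model-elicited pairwise similarity judgments with human judgments, can be sketched as follows. The two matrices below are entirely hypothetical toy data invented for illustration (they are not from the paper), but the comparison itself, correlating the unique off-diagonal pairs of two symmetric similarity matrices, mirrors the kind of analysis described.

```python
import numpy as np

# Hypothetical pairwise similarity matrices (items x items) for a small
# stimulus set: one standing in for human ratings, one for judgments
# elicited from a language model. Values are invented for illustration.
human = np.array([
    [1.0, 0.8, 0.2, 0.1],
    [0.8, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
])
model = np.array([
    [1.0, 0.7, 0.3, 0.2],
    [0.7, 1.0, 0.4, 0.1],
    [0.3, 0.4, 1.0, 0.6],
    [0.2, 0.1, 0.6, 1.0],
])

# Compare only the unique pairs (upper triangle, excluding the diagonal),
# since similarity matrices are symmetric and self-similarity is trivial.
iu = np.triu_indices_from(human, k=1)
r = np.corrcoef(human[iu], model[iu])[0, 1]

# r approaches 1 to the extent that the model's judgments track the
# human ones across stimulus pairs.
print(f"Pearson r between human and model judgments: {r:.3f}")
```

The restriction to upper-triangular entries avoids double-counting symmetric pairs and inflating the correlation with the constant diagonal.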

Details

Language(s): eng - English
Dates: 2023-12-08, 2024-09-03, 2024-09-13
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: Peer
Identifiers: DOI: 10.1038/s41598-024-72071-1
Degree: -

Source 1

Title: Scientific Reports
Abbreviation: Sci. Rep.
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: London, UK : Nature Publishing Group
Pages: -
Volume / Issue: 14
Sequence Number: 21445
Start / End Page: -
Identifier: ISSN: 2045-2322
CoNE: https://pure.mpg.de/cone/journals/resource/2045-2322