  Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models

Demircan, C., Saanum, T., Jagadish, A., Binz, M., & Schulz, E. (submitted). Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models.

Locators

Locator: https://arxiv.org/pdf/2410.01280 (Any fulltext)
Description: -
OA-Status: Not specified

Creators

Creators:
Demircan, C., Author
Saanum, T.¹, Author
Jagadish, A. K.², Author
Binz, M., Author
Schulz, E., Author
Affiliations:
¹ Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_3017468
² Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3189356

Content

Free keywords: -
Abstract: In-context learning, the ability to adapt based on a few examples in the input prompt, is a ubiquitous feature of large language models (LLMs). However, as LLMs' in-context learning abilities continue to improve, understanding this phenomenon mechanistically becomes increasingly important. In particular, it is not well understood how LLMs learn to solve specific classes of problems, such as reinforcement learning (RL) problems, in-context. Through three different tasks, we first show that Llama 3 70B can solve simple RL problems in-context. We then analyze the residual stream of Llama using Sparse Autoencoders (SAEs) and find representations that closely match temporal difference (TD) errors. Notably, these representations emerge despite the model only being trained to predict the next token. We verify that these representations are indeed causally involved in the computation of TD errors and Q-values by performing carefully designed interventions on them. Taken together, our work establishes a methodology for studying and manipulating in-context learning with SAEs, paving the way for a more mechanistic understanding.
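
For context on the quantities named in the abstract: the temporal difference (TD) error is the update signal in value-based reinforcement learning, i.e. the difference between a reward-plus-discounted-next-value target and the current Q-value estimate. The Python sketch below is a minimal, illustrative tabular Q-learning loop; the toy environment, step size, and discount factor are assumptions for illustration only and are not taken from the paper's tasks.

import numpy as np

# Minimal tabular Q-learning sketch illustrating the TD error the abstract
# refers to. The toy environment, learning rate, and discount factor are
# illustrative assumptions, not details of the paper's experiments.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))          # Q-value table
alpha, gamma = 0.1, 0.9                      # step size, discount factor

def step(state, action):
    # Toy transition: reward 1 only for action 0 in state 0, random next state.
    reward = 1.0 if (state == 0 and action == 0) else 0.0
    next_state = rng.integers(n_states)
    return reward, next_state

state = 0
for _ in range(1000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[state].argmax())
    reward, next_state = step(state, action)
    # TD error: reward plus discounted value of the next state,
    # minus the current estimate Q(s, a).
    td_error = reward + gamma * Q[next_state].max() - Q[state, action]
    Q[state, action] += alpha * td_error     # move the estimate toward the target
    state = next_state

In the paper's setting, analogous TD-error-like quantities are not computed by an explicit update rule but are read out of the model's residual stream via SAE features, per the abstract.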

Details

Language(s):
Dates: 2024-10
Publication Status: Submitted
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: DOI: 10.48550/arXiv.2410.01280
Degree: -
