  Meta-in-context learning in large language models

Coda-Forno, J., Binz, M., Akata, Z., Botvinick, M., Wang, J., & Schulz, E. (2024). Meta-in-context learning in large language models. In A. Ho, T. Naumann, A. Globerson, K. Saenko, M. Hardt, & S. Levine (Eds.), Advances in Neural Information Processing Systems 36: 37th Conference on Neural Information Processing Systems (NeurIPS 2023) (pp. 65189-65201). Red Hook, NY, USA: Curran.


Basic data

Genre: Conference paper

External references

External reference:
https://openreview.net/pdf?id=sx0xpaO0za (any full text)
Description:
-
OA status:
Not specified

Creators

Creators:
Coda-Forno, J.¹, Author
Binz, M.¹, Author
Akata, Z., Author
Botvinick, M., Author
Wang, J. X., Author
Schulz, E.¹, Author
Affiliations:
¹ Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3189356

Content

Keywords: -
Abstract: Large language models have shown tremendous performance in a variety of tasks. In-context learning (the ability to improve at a task after being provided with a number of demonstrations) is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes a large language model's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we broaden the scope of our investigation to encompass two diverse benchmarks: one focusing on real-world regression problems and the other encompassing multiple NLP tasks. In both cases, we observe performance competitive with that of traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environment they are applied in purely through meta-in-context learning rather than traditional finetuning.
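To make the abstract's core idea concrete, the sketch below illustrates how a meta-in-context prompt for the one-dimensional regression setting might be assembled. It is not taken from the paper: the prompt wording, task parameters, and the commented-out complete() call are hypothetical placeholders. The point it shows is that several tasks from the same family are concatenated into a single context, with only the final task's query left unanswered, so in-context learning on the last task can draw on the earlier ones.

    # Hypothetical sketch of a meta-in-context prompt for a 1D regression
    # setting; task parameters and prompt wording are illustrative only.
    import random

    def make_task(slope, intercept, n_points=5):
        """Sample noisy (x, y) pairs from one linear task."""
        xs = [round(random.uniform(0, 10), 1) for _ in range(n_points)]
        return [(x, round(slope * x + intercept + random.gauss(0, 0.5), 1))
                for x in xs]

    def format_task(pairs, query_x=None):
        """Render a task; if query_x is given, withhold its y for the model."""
        lines = [f"x = {x}, y = {y}" for x, y in pairs]
        if query_x is not None:
            lines.append(f"x = {query_x}, y =")
        return "\n".join(lines)

    # Several tasks drawn from the same family share one context; only the
    # last task poses a query, letting the model adapt its prior to the family.
    tasks = [make_task(slope=2.0, intercept=1.0) for _ in range(3)]
    parts = []
    for i, task in enumerate(tasks):
        query = 5.0 if i == len(tasks) - 1 else None
        parts.append(f"Task {i + 1}:\n" + format_task(task, query_x=query))
    prompt = "\n\n".join(parts)

    print(prompt)
    # prediction = complete(prompt)  # placeholder for a real LLM API call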

Details

Language(s): -
Date: 2024-05
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: -
Type of degree: -

Event

Title: Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
Venue: New Orleans, LA, USA
Start/end date: 2023-12-10 - 2023-12-16

Decision


Project information


Source 1

Title: Advances in Neural Information Processing Systems 36: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Source genre: Conference proceedings
Creators:
Ho, A., Editor
Naumann, T., Editor
Globerson, A., Editor
Saenko, K., Editor
Hardt, M., Editor
Levine, S., Editor
Affiliations:
-
Place, publisher, edition: Red Hook, NY, USA: Curran
Pages: -
Volume / Issue: -
Article number: 2844
Start / end page: 65189 - 65201
Identifier: -