  In-Context Impersonation Reveals Large Language Models’ Strengths and Biases

Salewski, L., Alaniz, S., Rio-Torto, I., Schulz, E., & Akata, Z. (2024). In-Context Impersonation Reveals Large Language Models’ Strengths and Biases. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, & S. Levine (Eds.), Advances in Neural Information Processing Systems 36: 37th Conference on Neural Information Processing Systems (NeurIPS 2023) (pp. 72044-72057). Red Hook, NY, USA: Curran.


Basic Data

Genre: Conference Paper

External References

External Reference:
https://openreview.net/pdf?id=CbsJ53LdKc (Publisher version)
Description: -
OA Status: Not specified

Creators

Creators:
Salewski, L, Author
Alaniz, S, Author
Rio-Torto, I, Author
Schulz, E1, Author
Akata, Z, Author
Affiliations:
1Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3189356              

Content

Keywords: -
Abstract: In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona that is associated either with a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their strengths and hidden biases. Our code is available at https://github.com/ExplainableML/in-context-impersonation.
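
The abstract describes the core technique: a persona prefix is prepended to the task prompt before the LLM answers. The short Python sketch below illustrates that general idea; the prompt wording and the query_llm placeholder are illustrative assumptions, not the authors' exact templates, which are available in the linked repository.

# Minimal sketch of in-context impersonation via a persona prefix.
# The prompt wording and query_llm() are assumptions for illustration;
# the authors' actual templates are in
# https://github.com/ExplainableML/in-context-impersonation.

def impersonation_prompt(persona: str, task: str) -> str:
    """Prefix a task prompt with a persona instruction."""
    return f"If you were a {persona}, how would you answer the following?\n{task}"

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM API of your choice."""
    raise NotImplementedError("Plug in your preferred LLM client here.")

if __name__ == "__main__":
    personas = ["4 year old", "bird expert", "car expert"]
    task = "Describe the most distinctive visual features of a sparrow."
    for persona in personas:
        prompt = impersonation_prompt(persona, task)
        print(f"--- {persona} ---")
        print(prompt)
        # answer = query_llm(prompt)  # compare answers across personas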

Details

Language(s):
Date: 2024-05
Publication Status: Published
Pages: -
Place, Publisher, Edition: -
Table of Contents: -
Type of Review: -
Identifiers: -
Type of Degree: -

Event

Title: Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
Venue: New Orleans, LA, USA
Start/End Date: 2023-12-10 - 2023-12-16

Decision


Project Information


Source 1

Title: Advances in Neural Information Processing Systems 36: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Source Genre: Conference Proceedings
Creators:
Oh, A, Editor
Naumann, T, Editor
Globerson, A, Editor
Saenko, K, Editor
Hardt, M, Editor
Levine, S, Editor
Affiliations:
-
Place, Publisher, Edition: Red Hook, NY, USA : Curran
Pages: -
Volume / Issue: -
Article Number: 3152
Start / End Page: 72044 - 72057
Identifier: -