  In-Context Impersonation Reveals Large Language Models’ Strengths and Biases

Salewski, L., Alaniz, S., Rio-Torto, I., Schulz, E., & Akata, Z. (2023). In-Context Impersonation Reveals Large Language Models’ Strengths and Biases. In Advances in Neural Information Processing Systems 36: 37th Conference on Neural Information Processing Systems (NeurIPS 2023).

Basic

Genre: Conference Paper


Locators

Locator:
https://openreview.net/pdf?id=CbsJ53LdKc (Publisher version)
Description:
-
OA-Status:
Not specified

Creators

Creators:
Salewski, L., Author
Alaniz, S., Author
Rio-Torto, I., Author
Schulz, E.1, Author
Akata, Z., Author
Affiliations:
1 Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3189356

Content

Free keywords: -
Abstract: In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is, impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona associated with either a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their strengths and hidden biases. Our code is available at https://github.com/ExplainableML/in-context-impersonation.
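
The persona-prefixing setup the abstract describes is straightforward to sketch in code. The following is a minimal illustration, not the authors' implementation (their code is at the GitHub link above): the OpenAI chat client, the model name, the prompt wording, and the helper ask_with_persona are all illustrative assumptions.

    # Minimal sketch of in-context impersonation: prefix the task prompt
    # with a persona before asking the model to solve it.
    # Assumptions: OpenAI Python client (v1+), illustrative model name and
    # prompt wording; the paper's actual prompts live in its repository.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_with_persona(persona: str, task: str) -> str:
        """Ask the model to answer `task` while impersonating `persona`."""
        prompt = f"If you were {persona}, how would you answer the following?\n{task}"
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Comparing a domain expert with a non-domain expert, in the spirit of
    # the paper's expert-description experiments:
    question = "Describe what distinguishes a cardinal from other songbirds."
    print(ask_with_persona("an ornithologist", question))
    print(ask_with_persona("a car mechanic", question))

Note that the persona string is the only thing that changes between the two calls, which is what lets differences in output quality be attributed to the impersonated role.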

Details

Language(s):
Dates: 2023-11
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: -
Degree: -

Event

Title: Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023)
Place of Event: New Orleans, LA, USA
Start-/End Date: 2023-12-10 - 2023-12-16


Source 1

Title: Advances in Neural Information Processing Systems 36: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Source Genre: Proceedings
Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -