
Released

Paper

Extracting Multi-valued Relations from Language Models

MPS-Authors

Singhania, Sneha
Databases and Information Systems, MPI for Informatics, Max Planck Society

Razniewski, Simon
Databases and Information Systems, MPI for Informatics, Max Planck Society

Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2307.03122.pdf
(Preprint), 280KB

Citation

Singhania, S., Razniewski, S., & Weikum, G. (2023). Extracting Multi-valued Relations from Language Models. Retrieved from https://arxiv.org/abs/2307.03122v2.


Cite as: https://hdl.handle.net/21.11116/0000-000D-938B-0
Abstract
The widespread usage of latent language representations via pre-trained
language models (LMs) suggests that they are a promising source of structured
knowledge. However, existing methods focus only on a single object per
subject-relation pair, even though often multiple objects are correct. To
overcome this limitation, we analyze these representations for their potential
to yield materialized multi-object relational knowledge. We formulate the
problem as a rank-then-select task. For ranking candidate objects, we evaluate
existing prompting techniques and propose new ones incorporating domain
knowledge. Among the selection methods, we find that choosing objects with a
likelihood above a learned relation-specific threshold gives a 49.5% F1 score.
Our results highlight the difficulty of employing LMs for the multi-valued
slot-filling task and pave the way for further research on extracting
relational knowledge from latent language representations.
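To make the rank-then-select formulation concrete, here is a minimal illustrative sketch, not the authors' code: it assumes candidate objects for a subject-relation pair have already been scored with LM likelihoods (e.g., from a cloze-style prompt), ranks them, and keeps those above a relation-specific threshold tuned for F1 on held-out data. The function names and the toy scores are hypothetical.

```python
# Illustrative sketch of rank-then-select for multi-valued slot filling.
# Candidate scores are assumed to come from an LM prompt; this code only
# shows the ranking, thresholding, and threshold-learning steps.

from typing import Dict, List, Set, Tuple


def rank_then_select(scores: Dict[str, float], threshold: float) -> List[str]:
    """Rank candidates by likelihood and keep those at or above the threshold."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [obj for obj, p in ranked if p >= threshold]


def learn_threshold(dev: List[Tuple[Dict[str, float], Set[str]]]) -> float:
    """Pick the relation-specific threshold maximizing F1 on dev examples.

    Each dev example pairs candidate scores with the gold set of objects.
    """
    candidate_thresholds = sorted({p for scores, _ in dev for p in scores.values()})
    best_t, best_f1 = 0.0, -1.0
    for t in candidate_thresholds:
        tp = fp = fn = 0
        for scores, gold in dev:
            pred = set(rank_then_select(scores, t))
            tp += len(pred & gold)
            fp += len(pred - gold)
            fn += len(gold - pred)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t


if __name__ == "__main__":
    # Toy example: objects for one subject-relation pair, with made-up LM scores.
    scores = {"Hindi": 0.41, "English": 0.35, "French": 0.08, "German": 0.02}
    dev = [(scores, {"Hindi", "English"})]
    t = learn_threshold(dev)
    print(rank_then_select(scores, t))  # -> ['Hindi', 'English']
```

The key design point the sketch mirrors is that the threshold is learned per relation rather than fixed globally, since different relations admit very different numbers of correct objects.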