
Journal Article

Evaluating the Knowledge Base Completion Potential of GPT

MPS-Authors

Veseli, Blerta
Databases and Information Systems, MPI for Informatics, Max Planck Society

Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2310.14771.pdf
(Preprint), 263KB

Citation

Veseli, B., Razniewski, S., Kalo, J.-C., & Weikum, G. (2023). Evaluating the Knowledge Base Completion Potential of GPT. Findings of EMNLP 2023. Retrieved from https://arxiv.org/abs/2310.14771.


Cite as: https://hdl.handle.net/21.11116/0000-000D-E9BA-B
Abstract
Structured knowledge bases (KBs) are an asset for search engines and other
applications, but they are inevitably incomplete. Language models (LMs) have
been proposed for unsupervised knowledge base completion (KBC), yet their
ability to do this at scale and with high accuracy remains an open question.
Prior experimental studies mostly fall short because they evaluate only on
popular subjects or sample already-existing facts from KBs. In this work, we
perform a careful evaluation of GPT's potential to complete the largest public
KB: Wikidata. We find that, despite their size and capabilities, models like
GPT-3, ChatGPT, and GPT-4 do not achieve fully convincing results on this task.
Nonetheless, they provide solid improvements over earlier approaches with
smaller LMs. In particular, we show that, with proper thresholding, GPT-3 can
extend Wikidata by 27M facts at 90% precision.
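The thresholding idea mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: it assumes each candidate fact comes with a model confidence score, and picks a score cutoff on a labeled validation set so that the retained predictions meet a target precision. The function name and toy data are hypothetical.

```python
# Illustrative sketch of precision-targeted thresholding (not from the paper):
# calibrate a confidence cutoff on labeled validation data, then keep only
# candidate facts scoring at or above that cutoff.

def pick_threshold(scores, labels, target_precision=0.90):
    """Return the lowest score cutoff whose retained set meets target_precision.

    scores -- model confidence per candidate fact
    labels -- 1 if the candidate fact is correct, else 0
    Returns None if no cutoff reaches the target.
    """
    pairs = sorted(zip(scores, labels), reverse=True)  # highest confidence first
    correct = 0
    best = None
    for i, (score, label) in enumerate(pairs, start=1):
        correct += label
        if correct / i >= target_precision:
            # Keeping everything down to this score still meets the bar.
            best = score
    return best

# Toy validation data: confidences and correctness of candidate facts.
scores = [0.99, 0.95, 0.90, 0.80, 0.60, 0.40]
labels = [1, 1, 1, 0, 1, 0]

cutoff = pick_threshold(scores, labels, target_precision=0.75)
kept = [s for s in scores if s >= cutoff]
```

With a cutoff calibrated this way, every candidate fact below the threshold is discarded, trading recall for the target precision on the facts that are added to the KB.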