
Released

Conference Paper

UnCommonSense: Informative Negative Knowledge about Everyday Concepts

MPS-Authors

Arnaout, Hiba
Databases and Information Systems, MPI for Informatics, Max Planck Society;


Razniewski, Simon
Databases and Information Systems, MPI for Informatics, Max Planck Society;


Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:2208.09292.pdf
(Preprint), 655KB

3511808.3557484.pdf
(Publisher version), 2MB

Citation

Arnaout, H., Razniewski, S., Weikum, G., & Pan, J. Z. (2022). UnCommonSense: Informative Negative Knowledge about Everyday Concepts. In M. Al Hasan, & L. Xiong (Eds.), CIKM '22 (pp. 37-46). New York, NY: ACM. doi:10.1145/3511808.3557484.


Cite as: https://hdl.handle.net/21.11116/0000-000A-F224-C
Abstract
Commonsense knowledge about everyday concepts is an important asset for AI
applications, such as question answering and chatbots. Recently, we have seen
an increasing interest in the construction of structured commonsense knowledge
bases (CSKBs). An important part of human commonsense is about properties that
do not apply to concepts, yet existing CSKBs only store positive statements.
Moreover, since CSKBs operate under the open-world assumption, absent
statements are considered to have unknown truth rather than being invalid. This
paper presents the UnCommonSense framework for materializing informative
negative commonsense statements. Given a target concept, comparable concepts
are identified in the CSKB, for which a local closed-world assumption is
postulated. This way, positive statements about comparable concepts that are
absent for the target concept become seeds for negative statement candidates.
The large set of candidates is then scrutinized, pruned and ranked by
informativeness. Intrinsic and extrinsic evaluations show that our method
significantly outperforms the state-of-the-art. A large dataset of informative
negations is released as a resource for future research.
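The pipeline the abstract describes — find comparable concepts, apply a local closed-world assumption, and treat peers' positive statements that are absent for the target as negative candidates — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy CSKB, the overlap-based notion of comparability, and the peer-count ranking are all illustrative assumptions.

```python
# Toy CSKB: concept -> set of (property, value) statements.
cskb = {
    "elephant": {("has", "trunk"), ("has", "tusks"), ("lives_in", "savanna")},
    "rhino": {("has", "horn"), ("has", "tusks"), ("lives_in", "savanna")},
    "lion": {("eats", "meat"), ("lives_in", "savanna")},
}

def comparable_concepts(target, kb, min_overlap=1):
    """Concepts sharing at least min_overlap statements with the target
    (a crude stand-in for the paper's comparability notion)."""
    return [c for c, stmts in kb.items()
            if c != target and len(stmts & kb[target]) >= min_overlap]

def negative_candidates(target, kb):
    """Under a local closed-world assumption over the comparable concepts,
    statements they hold that are absent for the target become candidate
    negations, ranked here by peer support as a naive informativeness proxy."""
    peers = comparable_concepts(target, kb)
    candidates = set()
    for peer in peers:
        candidates |= kb[peer] - kb[target]
    return sorted(candidates,
                  key=lambda s: -sum(s in kb[p] for p in peers))
```

For the target "elephant", the peers rhino and lion contribute ("has", "horn") and ("eats", "meat") as candidate negations; the real framework additionally scrutinizes and prunes such candidates before ranking.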