Joint Reasoning for Multi-Faceted Commonsense Knowledge


Razniewski, Simon
Databases and Information Systems, MPI for Informatics, Max Planck Society;


Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society;


Chalier, Y., Razniewski, S., & Weikum, G. (2020). Joint Reasoning for Multi-Faceted Commonsense Knowledge. Retrieved from http://arxiv.org/abs/2001.04170.

Cite as: https://hdl.handle.net/21.11116/0000-0005-8226-D

Commonsense knowledge (CSK) supports a variety of AI applications, from
visual understanding to chatbots. Prior works on acquiring CSK, such as
ConceptNet, have compiled statements that associate concepts, like everyday
objects or activities, with properties that hold for most or some instances of
the concept. Each concept is treated in isolation from other concepts, and the
only quantitative measure (or ranking) of properties is a confidence score that
the statement is valid. This paper aims to overcome these limitations by
introducing a multi-faceted model of CSK statements and methods for joint
reasoning over sets of inter-related statements. Our model captures four
different dimensions of CSK statements: plausibility, typicality, remarkability
and salience, with scoring and ranking along each dimension. For example,
hyenas drinking water is typical but not salient, whereas hyenas eating
carcasses is salient. For reasoning and ranking, we develop a method with soft
constraints that couples inference over concepts related in a taxonomic
hierarchy. The reasoning is cast as an integer linear program (ILP), and we
leverage the theory of reduced costs of a relaxed LP to compute
informative rankings. This methodology is applied to several large CSK
collections. Our evaluation shows that we can consolidate these inputs into
much cleaner and more expressive knowledge. Results are available at
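The joint reasoning with soft constraints can be sketched in miniature. The following is a hypothetical illustration, not the authors' implementation: it labels statements along a single dimension as a tiny 0/1 optimization, replaces the paper's ILP solver with brute-force enumeration, and uses one made-up soft rule coupling a parent concept (carnivore) to its taxonomy child (hyena); all scores are invented.

```python
from itertools import product

# Candidate statements (concept, property, prior confidence), as might come
# from a noisy CSK collection such as ConceptNet. Scores are made up.
statements = [
    ("carnivore", "eats meat", 0.9),
    ("hyena", "eats meat", 0.4),       # weak direct evidence
    ("hyena", "drinks water", 0.8),
    ("hyena", "plays piano", 0.1),
]
taxonomy = {"hyena": "carnivore"}      # child -> parent

PENALTY = 0.5  # weight of the soft taxonomy constraint

def objective(labels):
    """Score a 0/1 labeling: reward accepting high-confidence statements,
    penalize violating the soft rule 'a property accepted for the parent
    concept should also be accepted for the child'."""
    score = sum((conf - 0.5) * labels[i]
                for i, (_, _, conf) in enumerate(statements))
    for i, (concept, prop, _) in enumerate(statements):
        parent = taxonomy.get(concept)
        if parent is None:
            continue
        for j, (c2, p2, _) in enumerate(statements):
            if c2 == parent and p2 == prop and labels[j] == 1 and labels[i] == 0:
                score -= PENALTY       # parent accepted but child rejected
    return score

# Brute force over all 0/1 labelings (stand-in for the ILP solver).
best = max(product([0, 1], repeat=len(statements)), key=objective)
accepted = [s[:2] for s, lab in zip(statements, best) if lab]
print(accepted)
# → [('carnivore', 'eats meat'), ('hyena', 'eats meat'), ('hyena', 'drinks water')]
```

Note that "hyenas eat meat" is accepted despite its weak prior score of 0.4: rejecting it would violate the soft constraint inherited from the parent concept, which costs more than the 0.1 lost by accepting it.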
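The ranking signal mentioned above comes from reduced costs of the LP relaxation. A minimal sketch of the underlying linear algebra, with toy numbers not taken from the paper: for an LP max c^T x s.t. Ax = b, x >= 0 (slacks appended) and a basis B, the reduced cost of variable j is r_j = c_j - y^T A_j with duals y = B^{-T} c_B; the magnitude of a nonbasic variable's reduced cost indicates how strongly it is kept out of the solution, which is the kind of quantity one can rank by.

```python
import numpy as np

# Toy LP: max 3*x1 + 4*x2  s.t.  x1 + 2*x2 <= 14,  2*x1 + x2 <= 10,  x >= 0.
# In standard form with slack variables x3, x4:
c = np.array([3.0, 4.0, 0.0, 0.0])
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([14.0, 10.0])

basis = [0, 1]                      # x1, x2 basic at the optimal vertex (2, 6)
B = A[:, basis]
x_B = np.linalg.solve(B, b)         # basic solution values
y = np.linalg.solve(B.T, c[basis])  # dual values y = B^{-T} c_B
reduced = c - y @ A                 # r_j = c_j - y^T A_j for every variable j
print(x_B, reduced)
```

Here the reduced costs of the basic variables are 0 and those of the nonbasic slacks are negative, confirming the basis is optimal for the maximization; the differing magnitudes (5/3 vs. 2/3) are the per-variable signal that makes such rankings informative.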