Free keywords:
Computer Science, Computation and Language (cs.CL); Computer Science, Learning (cs.LG)
Abstract:
Misinformation such as fake news is one of the major challenges facing our society.
Research on automated fact-checking has proposed methods based on supervised
learning, but these approaches do not consider external evidence apart from
labeled training instances. Recent approaches counter this deficit by
considering external sources related to a claim. However, these methods require
substantial feature modeling and rich lexicons. This paper overcomes these
limitations of prior work with an end-to-end model for evidence-aware
credibility assessment of arbitrary textual claims, without any human
intervention. It presents a neural network model that judiciously aggregates
signals from external evidence articles, the language of these articles and the
trustworthiness of their sources. It also derives informative features for
generating user-comprehensible explanations that make the neural network
predictions transparent to the end-user. Experiments with four datasets and
ablation studies show the strength of our method.
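The core idea described above, aggregating per-article signals weighted by the trustworthiness of each article's source, can be sketched as follows. This is a minimal illustration only; the function name, the weighting scheme, and all numeric values are assumptions for exposition, not the paper's actual neural model.

```python
def aggregate_credibility(evidence):
    """Combine evidence into a single claim-credibility score.

    evidence: list of (article_support, source_trust) pairs,
    each value in [0, 1]. Returns a trust-weighted average of
    the per-article support signals.
    """
    if not evidence:
        return 0.5  # no external evidence: stay neutral
    weighted = sum(support * trust for support, trust in evidence)
    total_trust = sum(trust for _, trust in evidence)
    return weighted / total_trust if total_trust else 0.5

# Illustrative usage: two supportive articles from trusted sources
# outweigh one contradicting article from a low-trust source.
score = aggregate_credibility([(0.9, 0.8), (0.8, 0.9), (0.1, 0.2)])
```

In this toy scheme a low-trust source contributes little weight, so its contradicting signal barely moves the score; the actual paper learns such weightings end to end rather than hand-coding them.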