
Released

Research Paper

DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning

MPG Authors

Popat, Kashyap
Databases and Information Systems, MPI for Informatics, Max Planck Society

Yates, Andrew
Databases and Information Systems, MPI for Informatics, Max Planck Society

Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society

External Resources
No external resources are stored for this record.
Full texts (publicly accessible)

arXiv:1809.06416.pdf
(Preprint), 2 MB

Supplementary Material (publicly accessible)
No publicly accessible supplementary materials are available.
Citation

Popat, K., Mukherjee, S., Yates, A., & Weikum, G. (2018). DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning. Retrieved from http://arxiv.org/abs/1809.06416.


Citation link: https://hdl.handle.net/21.11116/0000-0002-5EE1-7
Abstract
Misinformation such as fake news is one of the major challenges facing our society. Research on automated fact-checking has proposed methods based on supervised learning, but these approaches do not consider external evidence apart from labeled training instances. Recent approaches counter this deficit by considering external sources related to a claim. However, these methods require substantial feature modeling and rich lexicons. This paper overcomes these limitations of prior work with an end-to-end model for evidence-aware credibility assessment of arbitrary textual claims, without any human intervention. It presents a neural network model that judiciously aggregates signals from external evidence articles, the language of these articles, and the trustworthiness of their sources. It also derives informative features for generating user-comprehensible explanations that make the neural network's predictions transparent to the end user. Experiments with four datasets and ablation studies demonstrate the strength of our method.
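
As a rough illustration of the kind of architecture the abstract describes, the following PyTorch sketch combines claim-conditioned attention over an evidence article's words with a source-trustworthiness embedding to produce a per-article credibility score. This is a minimal sketch based only on the abstract, not the paper's released implementation: the class name EvidenceAwareScorer, all dimensions, the mean-pooled claim encoding, and the sigmoid output are assumptions made for illustration.

# Illustrative sketch only; module names and design details are
# hypothetical and inferred from the abstract, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidenceAwareScorer(nn.Module):
    def __init__(self, vocab_size, num_sources, emb_dim=100, hidden_dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Embedding meant to capture the trustworthiness of each source.
        self.source_emb = nn.Embedding(num_sources, hidden_dim)
        # biLSTM over evidence article words (the "language of these
        # articles" signal in the abstract).
        self.article_lstm = nn.LSTM(emb_dim, hidden_dim // 2,
                                    batch_first=True, bidirectional=True)
        # Claim-conditioned attention: scores each article word by its
        # relevance to the claim.
        self.attn = nn.Linear(emb_dim + emb_dim, 1)
        self.out = nn.Linear(hidden_dim + hidden_dim, 1)

    def forward(self, claim_ids, article_ids, source_id):
        # claim_ids: (batch, claim_len); article_ids: (batch, art_len)
        claim = self.word_emb(claim_ids).mean(dim=1)            # (B, E)
        art_words = self.word_emb(article_ids)                  # (B, T, E)
        # Attention weights from [article word ; claim] pairs.
        claim_tiled = claim.unsqueeze(1).expand_as(art_words)   # (B, T, E)
        scores = self.attn(torch.cat([art_words, claim_tiled], dim=-1))
        weights = F.softmax(scores.squeeze(-1), dim=1)          # (B, T)
        states, _ = self.article_lstm(art_words)                # (B, T, H)
        evidence = (weights.unsqueeze(-1) * states).sum(dim=1)  # (B, H)
        source = self.source_emb(source_id)                     # (B, H)
        # Per-article credibility score in (0, 1).
        return torch.sigmoid(self.out(torch.cat([evidence, source], dim=-1)))

# Usage with toy inputs:
scorer = EvidenceAwareScorer(vocab_size=5000, num_sources=100)
score = scorer(torch.randint(0, 5000, (2, 8)),    # two claims, 8 tokens each
               torch.randint(0, 5000, (2, 50)),   # one evidence article each
               torch.randint(0, 100, (2,)))       # source ids
# score has shape (2, 1), one credibility estimate per claim-article pair.

In the spirit of the abstract's user-comprehensible explanations, the attention weights over article words can be inspected to see which evidence terms drove a given score; scores from multiple evidence articles would then be aggregated into the final per-claim verdict.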