  DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning

Popat, K., Mukherjee, S., Yates, A., & Weikum, G. (2018). DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning. Retrieved from http://arxiv.org/abs/1809.06416.

Basic

Genre: Paper
Latex : {DeClarE}: {D}ebunking Fake News and False Claims using Evidence-Aware Deep Learning

Files

arXiv:1809.06416.pdf (Preprint), 2MB
Name: arXiv:1809.06416.pdf
Description: File downloaded from arXiv at 2018-10-19 14:03 (EMNLP 2018)
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Copyright Date: -
Copyright Info: -
Creators
Popat, Kashyap (1), Author
Mukherjee, Subhabrata (2), Author
Yates, Andrew (1), Author
Weikum, Gerhard (1), Author
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computation and Language, cs.CL; Computer Science, Learning, cs.LG
Abstract: Misinformation such as fake news is one of the big challenges of our society. Research on automated fact-checking has proposed methods based on supervised learning, but these approaches do not consider external evidence apart from labeled training instances. Recent approaches counter this deficit by considering external sources related to a claim. However, these methods require substantial feature modeling and rich lexicons. This paper overcomes these limitations of prior work with an end-to-end model for evidence-aware credibility assessment of arbitrary textual claims, without any human intervention. It presents a neural network model that judiciously aggregates signals from external evidence articles, the language of these articles, and the trustworthiness of their sources. It also derives informative features for generating user-comprehensible explanations that make the neural network predictions transparent to the end user. Experiments with four datasets and ablation studies show the strength of our method.
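The aggregation idea described in the abstract (claim-conditioned attention over evidence articles, combined with source trustworthiness, pooled into one credibility score) can be caricatured in a few lines. This is a hedged sketch, not the paper's architecture: DeClarE encodes words with biLSTMs and learns all embeddings end-to-end, whereas here the fixed vectors, the dot-product attention, the single sigmoid scoring unit, and the averaging step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_weights(claim_vec, word_vecs):
    # Claim-conditioned attention: score each evidence word against
    # the claim representation, then softmax-normalize the scores.
    scores = word_vecs @ claim_vec
    e = np.exp(scores - scores.max())
    return e / e.sum()

def article_credibility(claim_vec, word_vecs, source_vec, w, b):
    # Attention-weighted pooling of the evidence-article words,
    # concatenated with a source embedding and scored by a sigmoid unit.
    alpha = attention_weights(claim_vec, word_vecs)
    evidence = alpha @ word_vecs
    features = np.concatenate([evidence, source_vec])
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

def claim_credibility(claim_vec, articles, w, b):
    # Aggregate per-article credibility scores into one claim-level
    # score by simple averaging over all retrieved articles.
    scores = [article_credibility(claim_vec, wv, sv, w, b)
              for wv, sv in articles]
    return float(np.mean(scores))
```

Because every per-article score passes through a sigmoid, the averaged claim score stays in (0, 1); the attention weights `alpha` also double as the kind of per-word evidence highlighting the abstract mentions for user-comprehensible explanations.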

Details

Language(s): eng - English
Dates: 2018-09-17, 2018
 Publication Status: Published online
 Pages: 11 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1809.06416
URI: http://arxiv.org/abs/1809.06416
BibTex Citekey: Popat_arXiv1809.06416
 Degree: -
