Paper

Conversational Question Answering on Heterogeneous Sources

MPS-Authors

Christmann, Philipp
Databases and Information Systems, MPI for Informatics, Max Planck Society


Saha Roy, Rishiraj
Databases and Information Systems, MPI for Informatics, Max Planck Society


Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2204.11677.pdf (Preprint), 860 KB

Citation

Christmann, P., Saha Roy, R., & Weikum, G. (2022). Conversational Question Answering on Heterogeneous Sources. Retrieved from https://arxiv.org/abs/2204.11677.


Cite as: https://hdl.handle.net/21.11116/0000-000C-164E-5
Abstract
Conversational question answering (ConvQA) tackles sequential information needs where the context of follow-up questions is left implicit. Current ConvQA systems operate over homogeneous sources of information: either a knowledge base (KB), a text corpus, or a collection of tables. This paper addresses the novel issue of jointly tapping into all of these together, thereby boosting answer coverage and confidence. We present CONVINSE, an end-to-end pipeline for ConvQA over heterogeneous sources, operating in three stages: (i) learning an explicit structured representation of an incoming question and its conversational context, (ii) harnessing this frame-like representation to uniformly capture relevant evidence from the KB, text, and tables, and (iii) running a fusion-in-decoder model to generate the answer. We construct and release ConvMix, the first benchmark for ConvQA over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases. Experiments demonstrate the viability and advantages of our method compared to state-of-the-art baselines.
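
To make the three-stage pipeline concrete, the following is a minimal, self-contained Python sketch of the flow the abstract describes. Everything in it is an illustrative assumption rather than the authors' implementation: the Frame fields, the rule-based structure(), the overlap-based retrieve(), and the generate() stub (which merely stands in for the fusion-in-decoder model). It only shows how a frame-like representation can drive uniform retrieval over verbalized KB facts, text, and table rows across conversation turns.

    from dataclasses import dataclass
    from typing import List, Optional

    # Stage (i): frame-like representation of the question in context.
    # Field names and the rule-based logic below are illustrative
    # assumptions; in the paper this representation is learned.
    @dataclass
    class Frame:
        entity: str    # topical entity, inherited from context if implicit
        relation: str  # relational phrase from the current question

    STOPWORDS = {"who", "what", "when", "where", "did", "is", "was",
                 "the", "and", "it", "in"}

    def structure(question: str, prev: Optional[Frame]) -> Frame:
        """Toy stage (i): first capitalized non-initial word is the entity;
        an implicit follow-up inherits the entity from the previous frame."""
        words = question.rstrip("?").split()
        caps = [w for w in words[1:] if w[0].isupper()]
        entity = caps[0] if caps else (prev.entity if prev else "")
        relation = " ".join(w for w in (x.lower() for x in words)
                            if w not in STOPWORDS and w != entity.lower())
        return Frame(entity, relation)

    # Stage (ii): heterogeneous sources, all verbalized as plain strings
    # so one retriever can score KB facts, text, and table rows alike.
    KB = [("Avatar", "directed by", "James Cameron"),
          ("Avatar", "released in", "2009")]
    TEXTS = ["James Cameron also wrote the screenplay for Avatar."]
    TABLE_ROWS = [{"film": "Avatar", "budget": "237 million USD"}]

    def evidence_pool() -> List[str]:
        pool = [" ".join(fact) for fact in KB]
        pool += TEXTS
        pool += [", ".join(f"{k} is {v}" for k, v in row.items())
                 for row in TABLE_ROWS]
        return pool

    def retrieve(frame: Frame, pool: List[str], k: int = 2) -> List[str]:
        """Toy stage (ii) scoring: token overlap between frame and evidence,
        a stand-in for the paper's learned evidence retrieval."""
        terms = set(f"{frame.entity} {frame.relation}".lower().split())
        return sorted(pool,
                      key=lambda e: -len(terms & set(e.lower().split())))[:k]

    def generate(evidence: List[str]) -> str:
        """Toy stage (iii): return the top-scored evidence verbatim, as a
        stand-in for the fusion-in-decoder answer generator."""
        return evidence[0]

    # Two-turn conversation; the follow-up leaves its entity implicit.
    f1 = structure("Who directed Avatar?", None)
    print(generate(retrieve(f1, evidence_pool())))   # Avatar directed by James Cameron

    f2 = structure("And when was it released?", f1)  # inherits entity "Avatar"
    print(generate(retrieve(f2, evidence_pool())))   # Avatar released in 2009

In CONVINSE itself, stages (i) and (ii) are learned models and stage (iii) is a generative fusion-in-decoder reader; the toy rules above only mirror the data flow among the three stages.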