  Conversational Question Answering on Heterogeneous Sources

Christmann, P., Saha Roy, R., & Weikum, G. (2022). Conversational Question Answering on Heterogeneous Sources. Retrieved from https://arxiv.org/abs/2204.11677.

Files

arXiv:2204.11677.pdf (Preprint), 860KB
Name:
arXiv:2204.11677.pdf
Description:
File downloaded from arXiv on 2022-12-28 at 12:17. SIGIR 2022 Research Track Long Paper.
OA-Status:
Not specified
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Christmann, Philipp (1), Author
Saha Roy, Rishiraj (1), Author
Weikum, Gerhard (1), Author
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018

Content

Free keywords: Computer Science, Information Retrieval, cs.IR; Computer Science, Computation and Language, cs.CL
Abstract: Conversational question answering (ConvQA) tackles sequential information needs where the context of follow-up questions is left implicit. Current ConvQA systems operate over homogeneous sources of information: either a knowledge base (KB), or a text corpus, or a collection of tables. This paper addresses the novel issue of jointly tapping into all of these together, thereby boosting answer coverage and confidence. We present CONVINSE, an end-to-end pipeline for ConvQA over heterogeneous sources, operating in three stages: i) learning an explicit structured representation of an incoming question and its conversational context, ii) harnessing this frame-like representation to uniformly capture relevant evidence from KB, text, and tables, and iii) running a fusion-in-decoder model to generate the answer. We construct and release the first benchmark, ConvMix, for ConvQA over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases. Experiments demonstrate the viability and advantages of our method compared to state-of-the-art baselines.
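
The abstract outlines a three-stage pipeline; below is a minimal Python sketch, under stated assumptions, of how such stages could be wired together. All names (StructuredRepresentation, understand, retrieve_evidence, generate_answer) are hypothetical placeholders with stubbed logic, not the authors' released implementation.

# Illustrative sketch only: interfaces and names are assumptions, not CONVINSE code.
from dataclasses import dataclass


@dataclass
class StructuredRepresentation:
    """Frame-like intent representation of a question plus its conversational context."""
    context_entity: str
    question_entity: str
    relation: str
    expected_answer_type: str


@dataclass
class Evidence:
    """A verbalized evidence snippet, whether it came from a KB, text, or a table."""
    source: str  # "kb" | "text" | "table"
    text: str


def understand(question: str, history: list[str]) -> StructuredRepresentation:
    # Stage i) learn an explicit structured representation of the incoming
    # question and the preceding turns (placeholder heuristic here).
    return StructuredRepresentation(
        context_entity=history[-1] if history else "",
        question_entity=question,
        relation="",
        expected_answer_type="entity",
    )


def retrieve_evidence(rep: StructuredRepresentation) -> list[Evidence]:
    # Stage ii) use the frame-like representation to gather evidence from a KB,
    # a text corpus, and tables in one uniform, verbalized format (stubbed).
    return [
        Evidence("kb", f"{rep.question_entity} {rep.relation} ..."),
        Evidence("text", f"Sentence mentioning {rep.question_entity} ..."),
        Evidence("table", f"Table row about {rep.question_entity} ..."),
    ]


def generate_answer(question: str, evidence: list[Evidence]) -> str:
    # Stage iii) a fusion-in-decoder style reader would encode each evidence
    # snippet with the question and fuse them in the decoder; stubbed here.
    return evidence[0].text if evidence else "unknown"


def answer_turn(question: str, history: list[str]) -> str:
    rep = understand(question, history)
    evidence = retrieve_evidence(rep)
    return generate_answer(question, evidence)


if __name__ == "__main__":
    history = ["Who directed Inception?"]
    print(answer_turn("When was it released?", history))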

Details

Language(s): eng - English
Dates: 2022-04-25, 2022
 Publication Status: Published online
 Pages: 12 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 2204.11677
URI: https://arxiv.org/abs/2204.11677
BibTeX Citekey: Christmann2204.11677
 Degree: -
