Free keywords:
Computer Science, Information Retrieval, cs.IR
Abstract:
Direct answering of questions that involve multiple entities and relations is
a challenge for text-based QA. This problem is most pronounced when answers can
be found only by joining evidence from multiple documents. Curated knowledge
graphs (KGs) may yield good answers, but are limited by their inherent
incompleteness and potential staleness. This paper presents QUEST, a method
that can answer complex questions directly from textual sources on-the-fly, by
computing similarity joins over partial results from different documents. Our
method is completely unsupervised, avoiding training-data bottlenecks and able
to cope with rapidly evolving ad hoc topics and formulation styles in user
questions. QUEST builds a noisy quasi KG with node and edge weights, consisting
of dynamically retrieved entity names and relational phrases. It augments this
graph with types and semantic alignments, and computes the best answers by an
algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex
questions, and show that it substantially outperforms state-of-the-art
baselines.
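
The Group Steiner Tree step at the core of QUEST can be illustrated with a toy sketch. The idea: nodes of the quasi KG that match question keywords form terminal *groups*, and the best answer lies on a minimum-weight tree that touches at least one node from every group. The sketch below is a hedged illustration, not the paper's algorithm: the node names, edge weights, and groups are hypothetical, and it uses exact brute-force enumeration (feasible only for tiny graphs) where a real system would need an approximation.

```python
from itertools import combinations

def mst_weight(nodes, edges):
    """Weight of a minimum spanning tree over the induced subgraph
    (Prim's algorithm), or None if the subgraph is disconnected."""
    nodes = set(nodes)
    start = next(iter(nodes))
    visited, total = {start}, 0.0
    while visited != nodes:
        best = None  # cheapest edge crossing the cut
        for (u, v), w in edges.items():
            if u in visited and v in nodes - visited:
                cand = (w, v)
            elif v in visited and u in nodes - visited:
                cand = (w, u)
            else:
                continue
            if best is None or cand < best:
                best = cand
        if best is None:
            return None  # no edge crosses the cut: disconnected
        total += best[0]
        visited.add(best[1])
    return total

def group_steiner(all_nodes, edges, groups):
    """Exact brute force: minimum-weight tree covering >= 1 node per group."""
    best_tree, best_w = None, float("inf")
    nodes = list(all_nodes)
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            s = set(subset)
            if not all(s & g for g in groups):
                continue  # some terminal group not covered
            w = mst_weight(s, edges)
            if w is not None and w < best_w:
                best_w, best_tree = w, s
    return best_tree, best_w

# Hypothetical quasi-KG fragment: entity and relation-phrase nodes,
# edge weights reflecting retrieval/extraction confidence.
nodes = ["Tarantino", "directed", "PulpFiction", "acted_in", "Travolta"]
edges = {
    ("Tarantino", "directed"): 0.2,
    ("directed", "PulpFiction"): 0.3,
    ("PulpFiction", "acted_in"): 0.4,
    ("acted_in", "Travolta"): 0.1,
}
# Terminal groups = nodes matching the question's keywords.
groups = [{"Tarantino"}, {"Travolta"}]

tree, weight = group_steiner(nodes, edges, groups)
print(tree, weight)  # the connecting tree passes through "PulpFiction"
```

Non-terminal entity nodes on the resulting tree (here, "PulpFiction") are the answer candidates, which matches the intuition of joining partial evidence across documents.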