  CROWN: Conversational Passage Ranking by Reasoning over Word Networks

Kaiser, M., Saha Roy, R., & Weikum, G. (2019). CROWN: Conversational Passage Ranking by Reasoning over Word Networks. Retrieved from http://arxiv.org/abs/1911.02850.

Files

arXiv:1911.02850.pdf (Preprint), 522KB
Name:
arXiv:1911.02850.pdf
Description:
File downloaded from arXiv at 2020-01-21 10:52
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]

Creators

Kaiser, Magdalena (1), Author
Saha Roy, Rishiraj (1), Author
Weikum, Gerhard (1), Author
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018

Content

Free keywords: Computer Science, Information Retrieval, cs.IR; Computer Science, Computation and Language, cs.CL
Abstract: Information needs around a topic often cannot be satisfied in a single turn; users typically ask follow-up questions on the same theme, and a system must understand the conversational context of a request to retrieve correct answers. In this paper, we present our submission to the TREC Conversational Assistance Track (CAsT) 2019, in which such a conversational setting is explored. We propose a simple unsupervised method for conversational passage ranking that formulates the passage score for a query as a combination of similarity and coherence. Specifically, we prefer passages that contain words semantically similar to the words used in the question, and in which such words appear close to one another. We build a word-proximity network (WPN) from a large corpus, where words are nodes and an edge connects two nodes if the corresponding words co-occur in the same passages, within a context window, in a statistically significant way. Our approach, named CROWN, improved nDCG scores over the provided Indri baseline on the CAsT training data. On the CAsT evaluation data, our best submitted run achieved above-average performance with respect to AP@5 and nDCG@1000.
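The word-proximity network described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes normalized PMI (NPMI) as the co-occurrence significance measure, whitespace tokenization, and an illustrative threshold, any of which may differ from CROWN's actual choices; the similarity component (matching passage words to query words) is omitted, and only a toy coherence score over WPN edges is shown.

```python
from collections import Counter
from itertools import combinations
import math

def build_wpn(passages, window=3, npmi_threshold=0.1):
    """Build a word-proximity network (WPN): nodes are words, and an edge
    connects two words that co-occur within a context window more often
    than chance. Here significance is measured with NPMI (an assumption;
    the paper's exact statistic may differ)."""
    word_counts = Counter()
    pair_counts = Counter()
    for passage in passages:
        tokens = passage.lower().split()
        word_counts.update(tokens)
        for i, w in enumerate(tokens):
            # Count co-occurrences with the next `window` tokens.
            for v in tokens[i + 1 : i + 1 + window]:
                if w != v:
                    pair_counts[tuple(sorted((w, v)))] += 1
    total_words = sum(word_counts.values())
    total_pairs = sum(pair_counts.values())
    edges = {}
    for (w, v), c in pair_counts.items():
        p_wv = c / total_pairs
        p_w = word_counts[w] / total_words
        p_v = word_counts[v] / total_words
        pmi = math.log(p_wv / (p_w * p_v))
        npmi = pmi / -math.log(p_wv)  # normalize into [-1, 1]
        if npmi > npmi_threshold:     # keep only significant pairs
            edges[(w, v)] = npmi
    return edges

def coherence(tokens, edges):
    """Toy coherence score for a passage: total WPN edge weight among
    its distinct word pairs (higher = words that 'belong together')."""
    return sum(edges.get(tuple(sorted(p)), 0.0)
               for p in combinations(set(tokens), 2))
```

A full ranker along the lines of the abstract would combine this coherence term with a query-to-passage similarity term (e.g. from word embeddings) into a single passage score.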

Details

Language(s): eng - English
 Dates: 2019-11-07, 2019-11-11, 2019
 Publication Status: Published online
 Pages: 13 p.
 Identifiers: arXiv: 1911.02850
URI: http://arxiv.org/abs/1911.02850
BibTex Citekey: Kaiser_arXiv1911.02850
