Paper

CROWN: Conversational Passage Ranking by Reasoning over Word Networks

MPS-Authors

Kaiser, Magdalena
Databases and Information Systems, MPI for Informatics, Max Planck Society


Saha Roy, Rishiraj
Databases and Information Systems, MPI for Informatics, Max Planck Society


Weikum, Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared.
Fulltext (public)

arXiv:1911.02850.pdf
(Preprint), 522 KB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Kaiser, M., Saha Roy, R., & Weikum, G. (2019). CROWN: Conversational Passage Ranking by Reasoning over Word Networks. Retrieved from http://arxiv.org/abs/1911.02850.


Cite as: http://hdl.handle.net/21.11116/0000-0005-83ED-C
Abstract
Information needs around a topic often cannot be satisfied in a single turn: users typically ask follow-up questions on the same theme, and a system must understand the conversational context of a request to retrieve correct answers. In this paper, we present our submission to the TREC Conversational Assistance Track (CAsT) 2019, which explores such a conversational setting. We propose a simple unsupervised method for conversational passage ranking that formulates the passage score for a query as a combination of similarity and coherence. Specifically, preferred passages contain words semantically similar to the words used in the question, with such words appearing close together. We build a word-proximity network (WPN) from a large corpus, where words are nodes and an edge connects two nodes if they co-occur in the same passages, within a context window, in a statistically significant way. Our approach, named CROWN, improved nDCG scores over the provided Indri baseline on the CAsT training data. On the CAsT evaluation data, our best submitted run achieved above-average performance on AP@5 and nDCG@1000.
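
To make the scoring idea concrete, the following is a minimal, hypothetical Python sketch of the approach described in the abstract: a word-proximity network built from co-occurrences within a context window, and a passage score that mixes query similarity with coherence over that network. The function names, the alpha mixing weight, the exact-string-match similarity (the paper uses semantic similarity), and plain co-occurrence counts in place of statistically significant edge weighting are all illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the CROWN-style scoring idea; not the authors' code.
from collections import Counter
from itertools import combinations

WINDOW = 3  # context window for co-occurrence (assumed value)

def build_wpn(passages, window=WINDOW):
    """Build a word-proximity network: nodes are words, edges connect words
    co-occurring within `window` positions; edge weights count co-occurrences.
    (The paper weights edges by statistical significance; raw counts are used
    here only to keep the sketch self-contained.)"""
    edges = Counter()
    for passage in passages:
        tokens = passage.lower().split()
        for i, w in enumerate(tokens):
            for u in tokens[i + 1 : i + 1 + window]:
                if u != w:
                    edges[frozenset((w, u))] += 1
    return edges

def similarity(word, query_terms):
    """Stand-in for semantic similarity (the paper uses word embeddings);
    exact string match is used here for simplicity."""
    return 1.0 if word in query_terms else 0.0

def score_passage(passage, query_terms, wpn, alpha=0.5):
    """Score = alpha * similarity + (1 - alpha) * coherence, where coherence
    rewards pairs of query-matching passage words that are linked in the WPN."""
    tokens = passage.lower().split()
    sim = sum(similarity(t, query_terms) for t in tokens) / max(len(tokens), 1)
    matched = [t for t in tokens if similarity(t, query_terms) > 0]
    pairs = list(combinations(set(matched), 2))
    coh = (sum(1.0 for a, b in pairs if wpn[frozenset((a, b))] > 0) / len(pairs)
           if pairs else 0.0)
    return alpha * sim + (1 - alpha) * coh

if __name__ == "__main__":
    corpus = [
        "neural ranking models for conversational search",
        "passage retrieval with word proximity networks",
        "conversational context helps passage ranking",
    ]
    wpn = build_wpn(corpus)
    query = {"conversational", "passage", "ranking"}
    for p in sorted(corpus, key=lambda p: score_passage(p, query, wpn), reverse=True):
        print(round(score_passage(p, query, wpn), 3), p)

In the actual system, embedding-based similarity and significance-weighted edges would replace the stand-ins above, and conversational context would expand the query terms across turns.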