
Released

Paper

Listening between the Lines: Learning Personal Attributes from Conversations

MPS-Authors
Tigunova, Anna (/persons/resource/persons230702)
Databases and Information Systems, MPI for Informatics, Max Planck Society

Yates, Andrew (/persons/resource/persons206666)
Databases and Information Systems, MPI for Informatics, Max Planck Society

Mirza, Paramita (/persons/resource/persons201381)
Databases and Information Systems, MPI for Informatics, Max Planck Society

Weikum, Gerhard (/persons/resource/persons45720)
Databases and Information Systems, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1904.10887.pdf (Preprint), 2 MB

Citation

Tigunova, A., Yates, A., Mirza, P., & Weikum, G. (2019). Listening between the Lines: Learning Personal Attributes from Conversations. Retrieved from http://arxiv.org/abs/1904.10887.


Cite as: https://hdl.handle.net/21.11116/0000-0003-FE7F-2
Abstract
Open-domain dialogue agents must be able to converse about many topics while incorporating knowledge about the user into the conversation. In this work we address the acquisition of such knowledge, for personalization in downstream Web applications, by extracting personal attributes from conversations. This problem is more challenging than the established task of information extraction from scientific publications or Wikipedia articles, because dialogues often give only implicit cues about the speaker. We propose methods for inferring personal attributes, such as profession, age or family status, from conversations using deep learning. Specifically, we propose several Hidden Attribute Models, which are neural networks leveraging attention mechanisms and embeddings. Our methods are trained on a per-predicate basis to output rankings of object values for a given subject-predicate combination (e.g., ranking the doctor and nurse professions high when speakers talk about patients, emergency rooms, etc.). Experiments with various conversational texts, including Reddit discussions, movie scripts and a collection of crowdsourced personal dialogues, demonstrate the viability of our methods and their superior performance compared to state-of-the-art baselines.
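
To give a concrete picture of the kind of model the abstract describes, below is a minimal PyTorch sketch of a Hidden Attribute Model: term embeddings, an attention mechanism that pools a speaker's utterance terms, and a per-predicate output layer that scores candidate object values. The layer sizes, the single level of attention, the number of candidate values, and all identifiers are illustrative assumptions, not the paper's exact architecture or training setup.

# Minimal sketch of a Hidden Attribute Model in the spirit of the abstract:
# term embeddings, attention over a speaker's utterance terms, and a
# per-predicate output layer scoring candidate object values (sizes assumed).
import torch
import torch.nn as nn

class HiddenAttributeModel(nn.Module):
    def __init__(self, vocab_size, num_values, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.attn = nn.Linear(emb_dim, 1)        # one attention score per term
        self.proj = nn.Linear(emb_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, num_values)  # scores for candidate values

    def forward(self, term_ids):                 # term_ids: (batch, num_terms)
        emb = self.embed(term_ids)               # (batch, num_terms, emb_dim)
        weights = torch.softmax(self.attn(emb).squeeze(-1), dim=-1)
        pooled = (weights.unsqueeze(-1) * emb).sum(dim=1)   # attention-weighted sum
        hidden = torch.relu(self.proj(pooled))
        return self.score(hidden)                # ranking scores over object values

# Hypothetical usage: one model per predicate (e.g., "profession"); candidate
# values are ranked for a speaker by sorting the output scores.
model = HiddenAttributeModel(vocab_size=50_000, num_values=71)
scores = model(torch.randint(1, 50_000, (4, 200)))  # 4 speakers, 200 terms each
ranking = scores.argsort(dim=-1, descending=True)

In this reading of the abstract, training one such model per predicate and ranking object values by their output scores matches the described subject-predicate setting; how utterances are segmented, how many attention levels are used, and the loss function are details left to the paper itself.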