
Item Details


Released

Report

Deep Reinforcement Learning of Marked Temporal Point Processes

MPS-Authors
/persons/resource/persons144813

Upadhyay, Utkarsh
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

/persons/resource/persons134152

De, Abir
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

/persons/resource/persons75510

Gomez Rodriguez, Manuel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

External Resource
There are no locators available
Fulltext (public)

arXiv:1805.09360.pdf
(Preprint), 6MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Upadhyay, U., De, A., & Gomez Rodriguez, M. (2018). Deep Reinforcement Learning of Marked Temporal Point Processes. Retrieved from http://arxiv.org/abs/1805.09360.


Cite as: https://hdl.handle.net/21.11116/0000-0003-4E2E-4
Abstract
In a wide variety of applications, humans interact with a complex environment by means of asynchronous stochastic discrete events in continuous time. Can we design online interventions that will help humans achieve certain goals in such an asynchronous setting? In this paper, we address the above problem from the perspective of deep reinforcement learning of marked temporal point processes, where both the actions taken by an agent and the feedback it receives from the environment are asynchronous stochastic discrete events characterized using marked temporal point processes. In doing so, we define the agent's policy using the intensity and mark distribution of the corresponding process and then derive a flexible policy gradient method, which embeds the agent's actions and the feedback it receives into real-valued vectors using deep recurrent neural networks. Our method does not make any assumptions on the functional form of the intensity and mark distribution of the feedback, and it allows for arbitrarily complex reward functions. We apply our methodology to two different applications in personalized teaching and viral marketing and, using data gathered from Duolingo and Twitter, we show that it may be able to find interventions to help learners and marketers achieve their goals more effectively than alternatives.
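
To make the approach concrete, here is a minimal sketch, not the authors' implementation, of the idea the abstract describes: a recurrent network embeds the agent's asynchronous (time, mark) events into a real-valued state, that state parameterizes a piecewise-constant intensity and a categorical mark distribution, and a REINFORCE-style gradient scales the log-likelihood of an episode's actions by the episode reward. All class, function, and parameter names (MTPPPolicy, log_prob_episode, the GRU cell, the toy episode and reward) are illustrative assumptions, not taken from the paper.

# Minimal sketch of a marked-temporal-point-process policy trained with
# a REINFORCE-style gradient. Assumptions, not the authors' code.
import torch
import torch.nn as nn

class MTPPPolicy(nn.Module):
    """Hypothetical policy: embeds (inter-event time, mark) pairs with a
    GRU cell; the hidden state parameterizes a constant conditional
    intensity and a categorical mark distribution until the next event."""
    def __init__(self, n_marks, hidden=32):
        super().__init__()
        self.embed = nn.Linear(1 + n_marks, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.intensity_head = nn.Linear(hidden, 1)   # outputs log lambda
        self.mark_head = nn.Linear(hidden, n_marks)  # outputs mark logits

    def step(self, h, dt, mark_onehot):
        # Fold the latest event into the recurrent state.
        x = torch.tanh(self.embed(torch.cat([dt, mark_onehot], dim=-1)))
        return self.rnn(x, h)

def log_prob_episode(policy, events, n_marks, horizon):
    """Log-likelihood of one action sequence [(t_i, y_i)]:
    sum_i [log lambda(t_i) + log m(y_i | t_i)] - integral_0^T lambda(tau) dtau.
    With a piecewise-constant intensity the integral reduces to a sum of
    lambda times the time elapsed in each piece."""
    h = torch.zeros(1, 32)  # matches the default hidden=32 above
    logp, t_prev = torch.zeros(()), 0.0
    for t, y in events:
        lam = policy.intensity_head(h).exp().squeeze()
        logits = policy.mark_head(h).squeeze(0)
        logp = logp + lam.log() - lam * (t - t_prev)          # timing term
        logp = logp + torch.log_softmax(logits, -1)[y]        # mark term
        onehot = torch.nn.functional.one_hot(
            torch.tensor([y]), n_marks).float()
        h = policy.step(h, torch.tensor([[t - t_prev]]), onehot)
        t_prev = t
    lam = policy.intensity_head(h).exp().squeeze()
    return logp - lam * (horizon - t_prev)  # survival term to the horizon

# REINFORCE-style update: scale the score function by the episode reward.
policy = MTPPPolicy(n_marks=3)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
episode = [(0.4, 1), (1.1, 0), (2.7, 2)]  # toy (time, mark) actions
reward = 1.0                              # stand-in for an arbitrary reward
loss = -reward * log_prob_episode(policy, episode, n_marks=3, horizon=3.0)
opt.zero_grad(); loss.backward(); opt.step()

Because the gradient only needs the log-likelihood of the sampled actions, this construction places no constraint on the feedback process or on the reward, which is the flexibility the abstract emphasizes.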