  Deep Reinforcement Learning of Marked Temporal Point Processes

Upadhyay, U., De, A., & Gomez Rodriguez, M. (2018). Deep Reinforcement Learning of Marked Temporal Point Processes. Retrieved from http://arxiv.org/abs/1805.09360.

Files

Name: arXiv:1805.09360.pdf (Preprint, 6MB)
Description: File downloaded from arXiv at 2019-04-03 13:04
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Copyright Date: -
Copyright Info: -

Creators

Upadhyay, Utkarsh (1), Author
De, Abir (1), Author
Gomez Rodriguez, Manuel (1), Author
Affiliations:
(1) Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society, ou_2105290

Content

Free keywords: Computer Science, Learning, cs.LG, cs.SI, Statistics, Machine Learning, stat.ML
Abstract: In a wide variety of applications, humans interact with a complex environment by means of asynchronous stochastic discrete events in continuous time. Can we design online interventions that will help humans achieve certain goals in such an asynchronous setting? In this paper, we address the above problem from the perspective of deep reinforcement learning of marked temporal point processes, where both the actions taken by an agent and the feedback it receives from the environment are asynchronous stochastic discrete events characterized using marked temporal point processes. In doing so, we define the agent's policy using the intensity and mark distribution of the corresponding process and then derive a flexible policy gradient method, which embeds the agent's actions and the feedback it receives into real-valued vectors using deep recurrent neural networks. Our method does not make any assumptions on the functional form of the intensity and mark distribution of the feedback and it allows for arbitrarily complex reward functions. We apply our methodology to two different applications in personalized teaching and viral marketing and, using data gathered from Duolingo and Twitter, we show that it may be able to find interventions to help learners and marketers achieve their goals more effectively than alternatives.
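The core idea in the abstract — treating the intensity of a point process as the agent's policy and applying a policy gradient — can be illustrated with a deliberately simplified sketch. This is a toy construction, not the paper's method: the policy here is a homogeneous Poisson process with constant intensity lam = exp(theta) (the paper instead parameterizes the intensity and mark distribution with recurrent neural networks), and the reward is simply the event count. For a realization with n events on [0, T], the log-density is n*theta - lam*T up to theta-independent terms, so the score is n - lam*T, and the REINFORCE estimator E[R * score] should recover the analytic gradient d E[N] / d theta = lam * T.

```python
import numpy as np

# Toy policy: homogeneous Poisson process on [0, T] with intensity
# lam = exp(theta).  log p(realization with n events) = n*theta - lam*T
# (up to theta-independent terms), so the score function is n - lam*T.
rng = np.random.default_rng(0)
T, theta = 10.0, 0.0
lam = np.exp(theta)

# Reward R = event count N; analytically, dE[N]/dtheta = lam * T.
n = rng.poisson(lam * T, size=100_000)   # event counts over many episodes
score = n - lam * T                      # score of each sampled realization
grad_est = np.mean(n * score)            # REINFORCE gradient estimate

print(grad_est)                          # should be close to lam * T = 10.0
```

With a richer, history-dependent intensity (as in the paper), only the score computation changes; the same likelihood-ratio estimator applies, which is what makes the approach agnostic to the functional form of the feedback process and the reward.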

Details

Language(s): eng - English
 Dates: 2018-05-23, 2018-11-06, 2018
 Publication Status: Published online
 Pages: 20 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1805.09360
URI: http://arxiv.org/abs/1805.09360
BibTex Citekey: Upadhyay_arXiv1805.09360
 Degree: -
