
Released

Paper

Leveraging Self-Supervised Training for Unintentional Action Recognition

MPS-Authors

Duka, Enea
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

Kukleva, Anna
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

Schiele, Bernt
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

arXiv:2209.11870.pdf
(Preprint), 4MB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Duka, E., Kukleva, A., & Schiele, B. (2022). Leveraging Self-Supervised Training for Unintentional Action Recognition. Retrieved from https://arxiv.org/abs/2209.11870.


Cite as: https://hdl.handle.net/21.11116/0000-000C-184F-2
Abstract
Unintentional actions are rare occurrences that are difficult to define precisely and that are highly dependent on the temporal context of the action. In this work, we explore such actions and seek to identify the points in videos where the actions transition from intentional to unintentional. We propose a multi-stage framework that exploits inherent biases, such as motion speed, motion direction, and order, to recognize unintentional actions. To enhance representations via self-supervised training for the task of unintentional action recognition, we propose temporal transformations, called Temporal Transformations of Inherent Biases of Unintentional Actions (T2IBUA). The multi-stage approach models the temporal information at the level of both individual frames and full clips. These enhanced representations show strong performance for unintentional action recognition tasks. We provide an extensive ablation study of our framework and report results that significantly improve over the state of the art.
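
As a rough illustration of the style of pretraining the abstract describes, the Python sketch below realizes bias-driven temporal transformations as a self-supervised pretext task over a clip's frame indices. The function names, the transformation set, and the labeling scheme here are illustrative assumptions, not the paper's exact T2IBUA definition.

# Illustrative sketch (not the paper's exact T2IBUA definition): temporal
# pretext transformations over a clip's frame indices. A model pretrained
# to predict which transformation was applied must become sensitive to
# motion speed, motion direction, and temporal order.
import numpy as np

def speed_up(idx, factor=2):
    # Motion-speed bias: subsample frames to simulate faster motion.
    return idx[::factor]

def reverse(idx):
    # Motion-direction bias: play the clip backwards.
    return idx[::-1]

def shuffle_segments(idx, n_segments=4, rng=None):
    # Order bias: permute coarse temporal segments of the clip.
    rng = rng or np.random.default_rng()
    parts = np.array_split(idx, n_segments)
    return np.concatenate([parts[i] for i in rng.permutation(n_segments)])

TRANSFORMS = [
    ("identity", lambda idx: idx),
    ("speed_up", speed_up),
    ("reverse", reverse),
    ("shuffle", shuffle_segments),
]

def pretext_sample(num_frames, rng=None):
    # Pick a transformation at random; its index is the pretext label.
    rng = rng or np.random.default_rng()
    label = int(rng.integers(len(TRANSFORMS)))
    name, fn = TRANSFORMS[label]
    return fn(np.arange(num_frames)), label, name

if __name__ == "__main__":
    idx, label, name = pretext_sample(16)
    print(name, idx)

During such pretraining, the network receives the transformed clip and predicts the transformation label, encouraging frame-level and clip-level representations that are sensitive to speed, direction, and order, which the downstream unintentional action recognition task can then build on.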