  Controlling video stimuli in sign language and gesture research: The OpenPoseR package for analyzing OpenPose motion tracking data in R

Trettenbrein, P., & Zaccarella, E. (2020). Controlling video stimuli in sign language and gesture research: The OpenPoseR package for analyzing OpenPose motion tracking data in R. PsyArXiv. doi:10.31234/osf.io/pnqxa.


Files

OpenPoseR.pdf (Preprint), 2MB
 
File Permalink: -
Name: OpenPoseR.pdf
Description: Identical to first version of preprint: https://doi.org/10.31234/osf.io/pnqxa
OA-Status: -
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata: -
Copyright Date: 2020
Copyright Info: -


Creators

Trettenbrein, Patrick (1, 2), Author
Zaccarella, Emiliano (1), Author
Affiliations:
1: Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634551
2: International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, MPI for Human Cognitive and Brain Sciences, Max Planck Society, Leipzig, DE, ou_2616696

Content

Free keywords: R, linguistics, psychology, neuroscience, sign language, gesture, video stimuli, motion tracking, OpenPose, stimulus control
 Abstract: Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have been mostly controlled only for some of the technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor’s movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used for controlling differences in the movements of an actor in stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
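Illustration: the abstract describes quantifying an actor's movement as frame-to-frame velocity and acceleration derived from OpenPose body-pose keypoints. The following is a minimal R sketch of that underlying computation only; it does not use the OpenPoseR package's own functions. The directory layout ("keypoints/" with one OpenPose JSON file per frame), the framerate value, and the assumption that exactly one person is detected in every frame are hypothetical.

# Minimal sketch (not the OpenPoseR API): per-frame velocity and acceleration
# from OpenPose JSON output, assuming the BODY_25 model and one JSON file per frame.
library(jsonlite)

frame_files <- sort(list.files("keypoints", pattern = "\\.json$", full.names = TRUE))
framerate <- 25  # assumed framerate of the source video clip

# Read x/y coordinates of the first detected person in one frame
read_frame <- function(file) {
  d  <- fromJSON(file)
  kp <- d$people$pose_keypoints_2d[[1]]        # x, y, confidence triplets
  matrix(kp, ncol = 3, byrow = TRUE)[, 1:2]    # keep x and y only
}

frames <- lapply(frame_files, read_frame)

# Summed Euclidean displacement of all keypoints between consecutive frames
# (for simplicity this sketch ignores keypoints OpenPose failed to detect)
displacement <- sapply(seq_along(frames)[-1], function(i) {
  sum(sqrt(rowSums((frames[[i]] - frames[[i - 1]])^2)))
})

velocity     <- displacement * framerate    # pixels per second
acceleration <- diff(velocity) * framerate  # change in velocity per second

summary(velocity)
plot(velocity, type = "l", xlab = "Frame", ylab = "Velocity (px/s)")

Summary statistics of such velocity and acceleration series are the kind of quantitative measure the abstract refers to for comparing stimulus clips or experimental conditions.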

Details

Language(s):
 Dates: 2020-11-12
 Publication Status: Published online
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.31234/osf.io/pnqxa
 Degree: -

Source 1

Title: PsyArXiv
Source Genre: Web Page
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -