
Item Details


Released

Journal Article

Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data

MPS-Authors
/persons/resource/persons188997

Drijvers,  Linda
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Communication in Social Interaction, Radboud University Nijmegen, External Organizations;
Other Research, MPI for Psycholinguistics, Max Planck Society;
The Communicative Brain, MPI for Psycholinguistics, Max Planck Society;

/persons/resource/persons4512

Holler,  Judith
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Communication in Social Interaction, Radboud University Nijmegen, External Organizations;
Other Research, MPI for Psycholinguistics, Max Planck Society;

External Resource

data and materials
(supplementary material)

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
Supplementary Material (public)
There is no public supplementary material available
Citation

Ripperda, J., Drijvers, L., & Holler, J. (2020). Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data. Behavior Research Methods, 52(4), 1783-1794. doi:10.3758/s13428-020-01350-2.


Cite as: https://hdl.handle.net/21.11116/0000-0005-9498-8
Abstract
In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automate the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) for our tool. We then provide a proof of principle and validation of our method by comparing SPUDNIG's output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need for a human coder (e.g., for detecting false positives), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and it successfully detected all iconic gestures in our validation dataset. Importantly, SPUDNIG's output can be directly imported into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication, as its annotations significantly accelerate the analysis of large video corpora.
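To make the abstract's core idea concrete, the Python sketch below illustrates one way movement initiation and termination could be detected from per-frame hand keypoints and exported for an annotation tool. This is not SPUDNIG's actual implementation: the keypoint source (a pose estimator such as OpenPose), the displacement threshold, the function names, and the CSV layout are all illustrative assumptions, and ELAN's CSV import would need to be configured to match the columns used here.

import csv
import math

def detect_movements(keypoints, fps, disp_threshold=5.0, min_frames=3):
    """Turn a wrist-keypoint track into (onset, offset) intervals in seconds.

    keypoints: list of (x, y) wrist coordinates, one per video frame
               (assumed to come from a pose estimator such as OpenPose).
    disp_threshold: minimum inter-frame displacement (pixels) counted as movement.
    min_frames: drop intervals shorter than this many frames (likely jitter).
    """
    # Label each frame as moving/still based on displacement from the previous frame.
    moving = [False]
    for (x0, y0), (x1, y1) in zip(keypoints, keypoints[1:]):
        moving.append(math.hypot(x1 - x0, y1 - y0) >= disp_threshold)

    # Merge consecutive moving frames into onset/offset intervals.
    intervals, start = [], None
    for i, is_moving in enumerate(moving):
        if is_moving and start is None:
            start = i
        elif not is_moving and start is not None:
            if i - start >= min_frames:
                intervals.append((start / fps, i / fps))
            start = None
    if start is not None and len(moving) - start >= min_frames:
        intervals.append((start / fps, len(moving) / fps))
    return intervals

def write_annotation_csv(intervals, path, tier="hand_movement"):
    """Write intervals as a simple CSV (a format annotation tools like ELAN can import)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["tier", "begin_sec", "end_sec", "annotation"])
        for begin, end in intervals:
            writer.writerow([tier, f"{begin:.3f}", f"{end:.3f}", "movement"])

if __name__ == "__main__":
    # Toy example at 25 fps: a still hand, one second of movement, still again.
    still = [(100.0, 200.0)] * 25
    burst = [(100.0 + 8.0 * i, 200.0) for i in range(25)]
    track = still + burst + still
    intervals = detect_movements(track, fps=25)
    print(intervals)  # one interval of roughly one second
    write_annotation_csv(intervals, "movements.csv")

SPUDNIG itself ships with a GUI and exports files for ELAN and ANVIL directly; the sketch only conveys the general onset/offset logic described in the abstract.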