
Conference Paper

Multimodal annotations in gesture and sign language studies

MPS-Authors

Brugman,  Hennie
Technical Group, MPI for Psycholinguistics, Max Planck Society;

Wittenburg,  Peter
Technical Group, MPI for Psycholinguistics, Max Planck Society;

Levinson,  Stephen C.
Language and Cognition Group, MPI for Psycholinguistics, Max Planck Society;
Technical Group, MPI for Psycholinguistics, Max Planck Society;

Kita,  Sotaro
Language and Cognition Group, MPI for Psycholinguistics, Max Planck Society;
Technical Group, MPI for Psycholinguistics, Max Planck Society;

Fulltext (public)

wittenburg_2002_multimodal.pdf
(Publisher version), 584KB

Citation

Brugman, H., Wittenburg, P., Levinson, S. C., & Kita, S. (2002). Multimodal annotations in gesture and sign language studies. In M. Rodriguez González & C. Paz Suárez Araujo (Eds.), Third international conference on language resources and evaluation (pp. 176-182). Paris: European Language Resources Association.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-1D0E-C
Abstract
For multimodal annotations, an exhaustive encoding system for gestures was developed to facilitate research. The structural requirements of multimodal annotations were analyzed to develop an Abstract Corpus Model, which is the basis both for a powerful annotation and exploitation tool for multimedia recordings and for the definition of the XML-based EUDICO Annotation Format. Finally, a metadata-based data management environment has been set up to facilitate resource discovery and, especially, corpus management. By means of an appropriate digitization policy and the online availability of the resulting recordings, researchers have been able to build up a large corpus covering gesture and sign language data.
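
The abstract names the XML-based EUDICO Annotation Format but does not show it. As a rough, non-authoritative sketch, a minimal time-aligned annotation file could look like the following; the element names are taken from the EAF schema of ELAN, the successor tool of EUDICO, and the media URL, tier name, and time values are invented for illustration. The 2002 format described in the paper may differ in detail.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical example: one time-aligned gesture annotation.
     Element names follow ELAN's EAF schema (EUDICO's successor);
     media URL, tier name, and time values are invented. -->
<ANNOTATION_DOCUMENT AUTHOR="" DATE="2002-05-01T12:00:00+01:00"
                     FORMAT="2.7" VERSION="2.7">
  <HEADER MEDIA_FILE="" TIME_UNITS="milliseconds">
    <MEDIA_DESCRIPTOR MEDIA_URL="file:///session01.mpg" MIME_TYPE="video/mpeg"/>
  </HEADER>
  <!-- All annotations reference shared time slots, so tiers for speech,
       gesture, and gaze stay aligned against the same media timeline. -->
  <TIME_ORDER>
    <TIME_SLOT TIME_SLOT_ID="ts1" TIME_VALUE="1200"/>
    <TIME_SLOT TIME_SLOT_ID="ts2" TIME_VALUE="1850"/>
  </TIME_ORDER>
  <TIER TIER_ID="Gesture-Phase" LINGUISTIC_TYPE_REF="gesture-phase" PARTICIPANT="S1">
    <ANNOTATION>
      <ALIGNABLE_ANNOTATION ANNOTATION_ID="a1"
                            TIME_SLOT_REF1="ts1" TIME_SLOT_REF2="ts2">
        <ANNOTATION_VALUE>stroke</ANNOTATION_VALUE>
      </ALIGNABLE_ANNOTATION>
    </ANNOTATION>
  </TIER>
  <LINGUISTIC_TYPE LINGUISTIC_TYPE_ID="gesture-phase" TIME_ALIGNABLE="true"/>
</ANNOTATION_DOCUMENT>

Because annotation values live on tiers and tiers point into a shared time order rather than carrying absolute timestamps, multiple modalities can be stored and queried against a single recording, which is the structural property the Abstract Corpus Model captures.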