Released

Conference Paper

Annotation of sign and gesture cross-linguistically

MPS-Authors
Zwitserlood, Inge
Center for Language Studies, external;
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;

Ozyurek, Asli
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
Language in Action, MPI for Psycholinguistics, Max Planck Society;
Neurobiology of Language Group, MPI for Psycholinguistics, Max Planck Society;

Perniss, Pamela M.
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
Language and Cognition Group, MPI for Psycholinguistics, Max Planck Society;

Fulltext (public)

Zwitserlood_2008_annotation.pdf
(Publisher version), 105KB

Citation

Zwitserlood, I., Ozyurek, A., & Perniss, P. M. (2008). Annotation of sign and gesture cross-linguistically. In O. Crasborn, E. Efthimiou, T. Hanke, E. D. Thoutenhoofd, & I. Zwitserlood (Eds.), Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages (pp. 185-190). Paris: ELDA.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-1FC2-F
Abstract
This paper discusses the construction of a cross-linguistic, bimodal corpus containing three modes of expression: expressions in two sign languages; speech and gestural expressions in two spoken languages; and pantomimic expressions by users of two spoken languages who are asked to convey information without speaking. We discuss some problems and tentative solutions for the annotation of utterances expressing spatial information about referents in these three modes, and suggest a set of comparable codes for describing both sign and gesture. Furthermore, we discuss the processing of the entered annotations in ELAN, e.g., relating descriptive annotations to analytic annotations in all three modes and performing relational searches across annotations on different tiers.
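
The cross-tier relational searches mentioned in the abstract are performed within ELAN itself (via its multi-tier structured search); purely as an outside illustration, the sketch below shows how temporally overlapping annotations on two tiers of an ELAN (.eaf) file could be paired programmatically. This is a minimal sketch assuming the third-party pympi-ling library; the file name and tier names ("Sign_Description", "Spatial_Analysis") are hypothetical placeholders, not the tier structure used in the paper.

```python
# Minimal sketch: pair temporally overlapping annotations on two ELAN tiers.
# Assumes the third-party pympi-ling library (pip install pympi-ling) and
# time-alignable tiers; all file and tier names are hypothetical placeholders.
import pympi


def overlapping_pairs(eaf_path, tier_a, tier_b):
    """Yield pairs of (begin_ms, end_ms, value) annotations whose intervals overlap."""
    eaf = pympi.Elan.Eaf(eaf_path)
    # get_annotation_data_for_tier returns tuples starting with (begin, end, value);
    # slicing to [:3] keeps the sketch robust if extra fields are present.
    anns_a = [ann[:3] for ann in eaf.get_annotation_data_for_tier(tier_a)]
    anns_b = [ann[:3] for ann in eaf.get_annotation_data_for_tier(tier_b)]
    for a_begin, a_end, a_value in anns_a:
        for b_begin, b_end, b_value in anns_b:
            # Two intervals overlap when each begins before the other ends.
            if a_begin < b_end and b_begin < a_end:
                yield (a_begin, a_end, a_value), (b_begin, b_end, b_value)


if __name__ == "__main__":
    # Hypothetical tier names; real tiers depend on the corpus design.
    for descriptive, analytic in overlapping_pairs(
        "session01.eaf", "Sign_Description", "Spatial_Analysis"
    ):
        print(descriptive, "<->", analytic)
```

For small corpora the nested loop above is adequate; sorting both tiers by onset time and sweeping through them would scale better, but ELAN's built-in structured search already covers the common cases without any scripting.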