Conference Paper

Application of audio and video processing methods for language research and documentation: The AVATecH Project

MPS-Authors

Lenkiewicz, Przemyslaw
The Language Archive, MPI for Psycholinguistics, Max Planck Society

Drude, Sebastian
The Language Archive, MPI for Psycholinguistics, Max Planck Society

Lenkiewicz, Anna
The Language Archive, MPI for Psycholinguistics, Max Planck Society

Gebre, Binyam Gebrekidan
The Language Archive, MPI for Psycholinguistics, Max Planck Society

Fulltext (public)

Lenkiewicz_etal_2014.pdf
(Publisher version), 2MB

Citation

Lenkiewicz, P., Drude, S., Lenkiewicz, A., Gebre, B. G., Masneri, S., Schreer, O., et al. (2014). Application of audio and video processing methods for language research and documentation: The AVATecH Project. In Z. Vetulani, & J. Mariani (Eds.), 5th Language and Technology Conference, LTC 2011, Poznań, Poland, November 25-27, 2011, Revised Selected Papers (pp. 288-299). Berlin: Springer.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0019-DB8B-6
Abstract
That all modern languages evolve and change is a well-known fact. Recently, however, this change has reached a pace never seen before, resulting in the loss of the vast amount of information encoded in every language. To preserve this rich heritage, and to carry out linguistic research, properly annotated recordings of the world's languages are necessary. Since creating these annotations is a very laborious task, taking up to 100 times longer than the length of the annotated media, innovative audio and video processing algorithms are needed to improve the efficiency and quality of the annotation process. This is the scope of the AVATecH project presented in this article.