
Released

Conference Paper

Robust Gaze Features for Enabling Language Proficiency Awareness

MPS-Authors

Chuang, LL
Project group: Cognition & Control in Human-Machine Systems, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Karolus, J., Wozniak, P., Chuang, L., & Schmidt, A. (2017). Robust Gaze Features for Enabling Language Proficiency Awareness. In G. Mark, & S. Fussell (Eds.), 2017 CHI Conference on Human Factors in Computing Systems (pp. 2998-3010). New York, NY, USA: ACM Press.


Cite as: https://hdl.handle.net/21.11116/0000-0000-C3B3-9
Abstract
In an increasingly globalized world, we are often confronted with information interfaces designed in an unfamiliar language, where the language barrier inhibits interaction with the system. In our work, we explore the design space for building interfaces that can detect the user's language proficiency. Specifically, we look at how a user's gaze properties can be used to detect whether the interface is presented in a language they understand. We report a study (N=21) in which participants were presented with questions in multiple languages while their gaze behavior was recorded. We identified fixation and blink durations as effective indicators of the participants' language proficiency. Based on these findings, we propose a classification scheme and technical guidelines for enabling language proficiency awareness on information displays using gaze data.
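To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch of how per-trial fixation and blink durations could be turned into features for a proficiency classifier. The event format, the synthetic data, and the choice of logistic regression are illustrative assumptions for this sketch; they are not the authors' published implementation.

```python
# Hypothetical sketch: derive fixation- and blink-duration features from
# gaze event records and train a simple proficiency classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def trial_features(events):
    """Mean fixation and blink durations (ms) for one trial.

    `events` is a list of (event_type, duration_ms) tuples, e.g. the
    output of an eye tracker's event parser (assumed format).
    """
    fix = [d for t, d in events if t == "fixation"]
    blink = [d for t, d in events if t == "blink"]
    return [np.mean(fix) if fix else 0.0,
            np.mean(blink) if blink else 0.0]

# Synthetic example data (assumption): longer fixations and blinks for
# unfamiliar-language trials (label 0), shorter for proficient trials (label 1).
rng = np.random.default_rng(0)

def fake_trial(proficient):
    mu_fix, mu_blink = (220, 120) if proficient else (300, 180)
    return ([("fixation", rng.normal(mu_fix, 30)) for _ in range(40)]
            + [("blink", rng.normal(mu_blink, 20)) for _ in range(5)])

X = np.array([trial_features(fake_trial(p)) for p in [0, 1] * 50])
y = np.array([0, 1] * 50)

clf = LogisticRegression()
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real data, the per-trial event lists would come from the eye tracker's fixation and blink detection, and the labels from participants' known language proficiencies; the two-feature design mirrors the abstract's finding that fixation and blink durations alone are informative.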