
Item Details


Released

Conference Paper

Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze & Touch

MPS-Authors
/persons/resource/persons86799

Bulling,  Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Turner, J., Alexander, J., Bulling, A., Schmidt, D., & Gellersen, H. (2013). Eye Pull, Eye Push: Moving Objects between Large Screens and Personal Devices with Gaze & Touch. In P. Kotzé, G. Marsden, G. Lindgaard, J. Wesson, & M. Winckler (Eds.), Human-Computer Interaction – INTERACT 2013 (pp. 170-186). Berlin: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0017-AF85-8
要旨
Previous work has validated the eyes and mobile input as a viable approach for pointing at, and selecting, out-of-reach objects. This work presents Eye Pull, Eye Push, a novel interaction concept for content transfer between public and personal devices using gaze and touch. We present three techniques that enable this interaction: Eye Cut & Paste, Eye Drag & Drop, and Eye Summon & Cast. We outline and discuss several scenarios in which these techniques can be used. In a user study we found that participants responded well to the visual feedback provided by Eye Drag & Drop during object movement. In contrast, we found that although Eye Summon & Cast significantly improved performance, participants had difficulty coordinating their hands and eyes during interaction.