User-defined surface+motion gestures for 3d manipulation of objects at a distance through a mobile device

One form of input for interacting with large shared surfaces is through mobile devices. These personal devices provide interactive displays as well as numerous sensors to effectuate gestures for input. We examine the possibility of using surface and motion gestures on mobile devices for interacting with 3D objects on large surfaces. If effective use of such devices is possible over large displays, then users can collaborate and carry out complex 3D manipulation tasks, which are not trivial to do. In an attempt to generate design guidelines for this type of interaction, we conducted a guessability study with a dual-surface concept device, which provides users access to information through both its front and back. We elicited a set of end-user surface- and motion-based gestures. Based on our results, we demonstrate reasonably good agreement between gestures for choice of sensory (i.e. tilt), multi-touch and dual-surface input. In this paper we report the results of the guessability study and the design of the gesture-based interface for 3D manipulation.

https://doi.org/10.1145/2350046.2350098

Hai-Ning Liang, Cary Williams, Myron Semegen, Wolfgang Stuerzlinger, and Pourang Irani. 2012. User-defined surface+motion gestures for 3d manipulation of objects at a distance through a mobile device. In Proceedings of the 10th asia pacific conference on Computer human interaction (APCHI '12). Association for Computing Machinery, New York, NY, USA, 299–308. DOI:https://doi.org/10.1145/2350046.2350098

BibTeX Entry

@inproceedings{10.1145/2350046.2350098,
author = {Liang, Hai-Ning and Williams, Cary and Semegen, Myron and Stuerzlinger, Wolfgang and Irani, Pourang},
title = {User-Defined Surface+motion Gestures for 3d Manipulation of Objects at a Distance through a Mobile Device},
year = {2012},
isbn = {9781450314961},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2350046.2350098},
doi = {10.1145/2350046.2350098},
abstract = {One form of input for interacting with large shared surfaces is through mobile devices. These personal devices provide interactive displays as well as numerous sensors to effectuate gestures for input. We examine the possibility of using surface and motion gestures on mobile devices for interacting with 3D objects on large surfaces. If effective use of such devices is possible over large displays, then users can collaborate and carry out complex 3D manipulation tasks, which are not trivial to do. In an attempt to generate design guidelines for this type of interaction, we conducted a guessability study with a dual-surface concept device, which provides users access to information through both its front and back. We elicited a set of end-user surface- and motion-based gestures. Based on our results, we demonstrate reasonably good agreement between gestures for choice of sensory (i.e. tilt), multi-touch and dual-surface input. In this paper we report the results of the guessability study and the design of the gesture-based interface for 3D manipulation.},
booktitle = {Proceedings of the 10th Asia Pacific Conference on Computer Human Interaction},
pages = {299–308},
numpages = {10},
keywords = {mobile devices, collaboration interfaces, surface gestures, 3d visualizations, multi-display environments, interaction techniques, input devices, motion gestures},
location = {Matsue-city, Shimane, Japan},
series = {APCHI '12}
}

Authors

Pourang Irani

Professor and Canada Research Chair
University of British Columbia, Okanagan Campus