See Me, See You: A lightweight method for discriminating user touches on tabletop displays

Hong Zhang*, Xing Dong Yang, Barrett Ens, Hai Ning Liang, Pierre Boulanger, Pourang Irani

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding › peer-review

23 Citations (Scopus)


Tabletop systems provide a versatile space for collaboration, yet, in many cases, are limited by the inability to differentiate the interactions of simultaneous users. We present See Me, See You, a lightweight approach for discriminating user touches on a vision-based tabletop. We contribute a valuable characterization of finger orientation distributions of tabletop users. We exploit this biometric trait with a machine learning approach to allow the system to predict the correct position of users as they touch the surface. We achieve accuracies as high as 98% in simple situations and above 92% in more challenging conditions, such as two-handed tasks. We show high acceptance from users, who can self-correct prediction errors without significant costs. See Me, See You is a viable solution for providing simple yet effective support for multi-user application features on tabletops.
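The abstract describes predicting a user's position around the table from the orientation of their touching finger. As a hedged illustration only (not the paper's implementation, which uses a trained machine learning model on a vision-based tabletop), the idea can be sketched with a simple nearest-mean classifier over finger-orientation angles; the seat labels, angle samples, and `SeatClassifier` helper below are all hypothetical:

```python
import math

def mean_angle(angles_deg):
    """Circular mean of a list of angles, in degrees."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

class SeatClassifier:
    """Toy nearest-mean classifier: assign a touch to the seat whose
    typical finger orientation is closest to the observed orientation."""
    def __init__(self, training):
        # training: {seat_label: [finger-orientation samples in degrees]}
        self.means = {seat: mean_angle(samples)
                      for seat, samples in training.items()}

    def predict(self, orientation_deg):
        return min(self.means,
                   key=lambda s: angular_distance(self.means[s], orientation_deg))

# Synthetic data: two users on opposite table edges. Fingers tend to point
# away from the body, so their orientations cluster roughly 180 degrees apart.
clf = SeatClassifier({
    "north": [85, 95, 100, 80],
    "south": [265, 275, 280, 260],
})
print(clf.predict(92))   # -> north
print(clf.predict(270))  # -> south
```

This captures only the intuition that finger-orientation distributions separate users by position; the paper's reported accuracies (98% in simple situations, above 92% in two-handed tasks) come from its own learned model, not this sketch.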

Original language: English
Title of host publication: Conference Proceedings - The 30th ACM Conference on Human Factors in Computing Systems, CHI 2012
Number of pages: 10
Publication status: Published - 2012
Externally published: Yes
Event: 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 - Austin, TX, United States
Duration: 5 May 2012 - 10 May 2012

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings


Conference: 30th ACM Conference on Human Factors in Computing Systems, CHI 2012
Country/Territory: United States
City: Austin, TX


  • Multi-user application
  • Position aware system
  • Tabletop interaction
  • Touch discrimination
