Bootstrapper: Recognizing Tabletop Users by Their Shoes

Stephan Richter, Christian Holz and Patrick Baudisch. CHI 2012.
Hasso Plattner Institute, Potsdam, Germany.

Figure 1: Bootstrapper—A Kinect camera mounted to a Microsoft Surface table


Bootstrapper recognizes users interacting with the table by observing their shoes using a depth and an RGB camera. Here we use a Kinect camera to extract users' shoes from the depth image, retrieve their textures from the color image, and match them against samples in the database to finally identify users.
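The recognition pipeline described above can be sketched in a few steps. The sketch below is a simplified illustration under stated assumptions: a plain depth-band threshold stands in for the paper's shoe segmentation, and a coarse color histogram with histogram-intersection matching stands in for its texture matcher; the function names and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

def segment_shoe(depth, color, near=0.4, far=1.2):
    """Keep color pixels whose depth (in meters) falls in the expected
    shoe range. Assumed simplification of the depth-based segmentation."""
    mask = (depth > near) & (depth < far)
    patch = color.copy()
    patch[~mask] = 0
    return patch, mask

def color_histogram(patch, mask, bins=8):
    """Coarse RGB histogram over the segmented shoe pixels, L1-normalized."""
    pixels = patch[mask]                       # (N, 3) shoe pixels
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1)

def identify(depth, color, database):
    """Match the observed shoe against enrolled histograms using
    histogram intersection; return the best-matching user and score."""
    patch, mask = segment_shoe(depth, color)
    h = color_histogram(patch, mask)
    best, score = None, -1.0
    for user, ref in database.items():
        s = np.minimum(h, ref).sum()           # histogram intersection
        if s > score:
            best, score = user, s
    return best, score
```

Enrollment would simply run the same segmentation and histogram extraction on sample images and store the result per user in `database`.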


In order to enable personalized functionality, such as logging tabletop activity by user, tabletop systems need to recognize users. DiamondTouch does so reliably, but requires users to stay in assigned seats and cannot recognize users across sessions. We propose a different approach based on distinguishing users' shoes. While users are interacting with the table, our system Bootstrapper observes their shoes using one or more depth cameras mounted to the edge of the table. It then identifies users by matching camera images against a database of known shoe images. When multiple users interact, Bootstrapper associates touches with shoes based on hand orientation. The approach can be implemented using consumer depth cameras because (1) shoes offer large, distinct features such as color, and (2) shoes naturally align themselves with the ground, giving the system a well-defined perspective and thus reduced ambiguity. We report two simple studies in which Bootstrapper recognized participants from a database of 18 users with 95.8% accuracy.
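The touch-to-shoe association step can be read geometrically: extrapolate a ray from the touch point along the hand orientation toward the user's side of the table, and pick the tracked shoe closest to that ray. The sketch below is an illustrative geometric heuristic under assumed coordinates, not the paper's implementation; all names are hypothetical.

```python
import math

def associate_touch(touch, orientation_deg, shoe_positions):
    """Extrapolate a ray from the touch point along the hand orientation
    (assumed to point toward the touching user's side of the table) and
    return the user whose tracked shoe lies closest to that ray."""
    rad = math.radians(orientation_deg)
    dx, dy = math.cos(rad), math.sin(rad)
    best, best_dist = None, float("inf")
    for user, (sx, sy) in shoe_positions.items():
        vx, vy = sx - touch[0], sy - touch[1]
        along = vx * dx + vy * dy
        if along <= 0:
            continue                      # shoe is behind the ray origin
        perp = abs(vx * dy - vy * dx)     # perpendicular distance to ray
        if perp < best_dist:
            best, best_dist = user, perp
    return best
```

With several users around the table, each touch would be resolved independently, so simultaneous interaction by different users can still be attributed per user.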



Stephan Richter, Christian Holz, and Patrick Baudisch. 2012. Bootstrapper: Recognizing Tabletop Users by their Shoes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), Austin, TX, USA, May 5–10, 2012. ACM, New York, NY, USA, 1249–1252.