
Gesture-based system for next generation natural and intuitive interfaces

Published online by Cambridge University Press:  30 May 2018

Jinmiao Huang
Affiliation:
ABB Inc., Bloomfield, Connecticut, USA
Prakhar Jaiswal
Affiliation:
MADLab, University at Buffalo – SUNY, Buffalo, New York, USA
Rahul Rai*
Affiliation:
MADLab, University at Buffalo – SUNY, Buffalo, New York, USA
Author for correspondence: Rahul Rai, E-mail: rahulrai@buffalo.edu

Abstract

We present a novel and trainable gesture-based system for next-generation intelligent interfaces. The system requires a non-contact depth-sensing device, such as an RGB-D (color and depth) camera, for user input. The camera records the user's static hand pose and the dynamic motion trajectory of the palm center. Static pose and dynamic trajectory are used independently to issue commands to the interface. The sketches/symbols formed by the palm-center trajectory are recognized by a Support Vector Machine (SVM) classifier; the recognition process is based on a set of geometrical and statistical features. A static hand pose recognizer is incorporated to expand the functionality of the system, and is used in conjunction with the sketch classification algorithm to build a robust and effective system for natural and intuitive interaction. To evaluate the performance of the system, user studies were conducted with multiple participants. The efficacy of the presented system is demonstrated using multiple interfaces developed for different tasks, including computer-aided design (CAD) modeling.
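As an illustration of the sketch-recognition step described above, the following minimal sketch (not the authors' implementation; the feature set, the scikit-learn usage, and the parameter values are assumptions) trains an SVM on simple geometric and statistical features of resampled palm-center trajectories:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def trajectory_features(points):
    """Illustrative geometric/statistical features for one palm-center
    trajectory, given as an (N, 3) array of resampled 3D points."""
    diffs = np.diff(points, axis=0)
    seg_len = np.linalg.norm(diffs, axis=1)
    path_len = seg_len.sum()
    bbox = points.max(axis=0) - points.min(axis=0)      # bounding-box extents
    closure = np.linalg.norm(points[-1] - points[0])    # start-to-end distance
    return np.hstack([
        path_len,
        closure / (path_len + 1e-9),                    # straightness ratio
        bbox,                                           # 3 extent features
        points.std(axis=0),                             # 3 spread features
    ])

def train_sketch_classifier(trajectories, labels, C=10.0, gamma=0.1):
    """trajectories: list of (N, 3) arrays; labels: symbol class per sketch."""
    X = np.vstack([trajectory_features(t) for t in trajectories])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=C, gamma=gamma))
    clf.fit(X, labels)
    return clf
```

In practice, C and gamma would be tuned by cross-validated grid search, as suggested by Fig. 11 below.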

Information

Type
Research Article
Copyright
Copyright © Cambridge University Press 2018 
Fig. 1. Flowchart showing both learning and operation phases of the developed system.

Fig. 2. System setup.

Fig. 3. (a) Raw 3D input sketch, (b) Refined sketch, and (c) Preprocessed sketch after resampling.

Fig. 4. Color plot of the distance matrix and average distance matrix.

Fig. 5. (a) and (b) indicate the difference in the length of the forearm section in two poses (l1 < l2), and (c) illustrates the method used to segment off the forearm section.

Fig. 6. (a) Original position and orientation of the captured hand shape, and (b) position and orientation of the hand shape after transformation.

Fig. 7. (a) A sample hand pose; (b) and (c) show different alignment results for the sample hand pose with the same template.

Fig. 8. Point-to-point error and point-to-plane error between two surfaces.
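For context, Fig. 8 contrasts the two standard registration error metrics used in ICP-style surface alignment. In common notation (a sketch of the usual formulation, since the paper's exact expressions are not reproduced here), with source points p_i, closest target points q_i, target surface normals n_i, rotation R, and translation t:

```latex
E_{\text{point-to-point}} = \sum_i \left\| R\,p_i + t - q_i \right\|^2,
\qquad
E_{\text{point-to-plane}} = \sum_i \bigl( (R\,p_i + t - q_i) \cdot n_i \bigr)^2
```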

Fig. 9. Symbols used in the experiments. Domain 1: Arabic numerals; Domain 2: English alphabet letters; Domain 3: Physics simulation symbols; Domain 4: Procedural CAD symbols. (Note that this figure shows 2D projections of the 3D sketches.)

Table 1. Database for user study

Fig. 10. Variation of the explained variance ratio (EVR) and its cumulative sum across the principal components, ordered by decreasing eigenvalue.

Fig. 11. Variation in cross-validation accuracy for different values of SVM model parameters C and γ.
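As context for Fig. 11, the cross-validated tuning of C and γ that the figure depicts could be carried out as follows (a hedged sketch; the grid values and fold count are assumptions, and X and y stand for the sketch-feature matrix and symbol labels from the training set):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Grid of candidate SVM hyperparameters (values are illustrative).
param_grid = {
    "svc__C": [0.1, 1, 10, 100, 1000],           # regularization strength
    "svc__gamma": [1e-4, 1e-3, 1e-2, 0.1, 1.0],  # RBF kernel width
}
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid,
    cv=5,  # mean 5-fold cross-validation accuracy per grid cell
)
# With X (features) and y (labels) available:
# search.fit(X, y); search.best_params_ holds the chosen (C, gamma).
```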

Fig. 12. Intra-class variations in drawing habits of users.

Table 2. Classification accuracy of the SVM model with optimal parameter values

Fig. 13. Hand pose database.

Fig. 14. Interfaces developed to demonstrate the efficacy of our system: (a) Text/symbol recognition interface, (b) 3D CAD modeling and manipulation interface, and (c) Video player interface.

Fig. 15. Scaled product family of various 3D models created using hand gesture-based CAD interface.