
Modeling intent and destination prediction within a Bayesian framework: Predictive touch as a use case

Published online by Cambridge University Press:  27 October 2020

Runze Gan*
Affiliation:
Engineering Department, University of Cambridge, Cambridge, United Kingdom (Equal contribution)
Jiaming Liang
Affiliation:
Engineering Department, University of Cambridge, Cambridge, United Kingdom (Equal contribution)
Bashar I. Ahmad
Affiliation:
Engineering Department, University of Cambridge, Cambridge, United Kingdom
Simon Godsill
Affiliation:
Engineering Department, University of Cambridge, Cambridge, United Kingdom
*Corresponding author. E-mail: rg605@cam.ac.uk

Abstract

In various scenarios, the motion of a tracked object, for example, a pointing apparatus, pedestrian, animal, or vehicle, is driven by achieving a premeditated goal such as reaching a destination, notwithstanding the many possible trajectories to this endpoint. This paper presents a generic Bayesian framework that utilizes stochastic models capable of capturing the influence of intent (viz., destination) on the object's behavior. It leads to simple algorithms that infer, as early as possible, the intended endpoint from noisy sensory observations, with relatively low computational and training-data requirements. The framework is introduced in the context of the novel predictive touch technology for intelligent user interfaces and touchless interactions. It can determine, early in the interaction task or pointing gesture, the interface item the user intends to select on the display (e.g., a touchscreen) and accordingly simplify as well as expedite the selection task. This is shown to significantly improve the usability of displays in vehicles, especially under perturbations due to road and driving conditions, and to enable intuitive contact-free interactions. Data collected in instrumented vehicles demonstrate the effectiveness of the proposed intent prediction approach.
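As a rough illustration of the inference problem described in the abstract (not the paper's algorithm), a posterior over candidate destinations can be obtained by Bayes' rule from per-destination likelihoods of the observed trajectory. The sketch below scores each candidate by how closely the noisy observations follow a straight-line approach to it; the function name, the straight-line likelihood, and the noise scale `sigma` are all illustrative assumptions.

```python
import numpy as np

def destination_posterior(observations, destinations, prior=None, sigma=0.05):
    """Return p(D_i | y_{1:n}) for each candidate destination.

    observations : (n, 3) array of noisy fingertip positions
    destinations : (m, 3) array of candidate endpoints (e.g., GUI icons)
    sigma        : assumed deviation noise scale (illustrative)
    """
    obs = np.asarray(observations, dtype=float)
    dests = np.asarray(destinations, dtype=float)
    m = len(dests)
    prior = np.full(m, 1.0 / m) if prior is None else np.asarray(prior, float)

    log_lik = np.zeros(m)
    start = obs[0]
    for i, d in enumerate(dests):
        # Perpendicular distance of each observation from the start->destination line
        direction = d - start
        direction = direction / (np.linalg.norm(direction) + 1e-12)
        rel = obs - start
        along = rel @ direction
        perp = rel - np.outer(along, direction)
        dist2 = np.sum(perp ** 2, axis=1)
        log_lik[i] = -0.5 * np.sum(dist2) / sigma ** 2

    # Bayes' rule in log space, with numerical stabilization
    log_post = np.log(prior) + log_lik
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()
```

In a predictive touch setting, `destinations` would be the display items; the posterior can be recomputed as each new observation arrives, so a confident selection can be made early in the gesture.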

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is included and the original work is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use.
Copyright
© The Author(s), 2020. Published by Cambridge University Press

Figure 1. Block diagram of an in-vehicle predictive touch system. The dotted line is a recorded full in-car pointing trajectory. The gesture tracker (the sensor faces downward to increase the region of coverage and minimize occlusions) provides, at time $ {t}_n $, the pointing finger/hand Cartesian coordinates along the x, y, and z axes, denoted by $ {\mathbf{y}}_n $.

Figure 2. The 3D velocity-norm profile generated by the equilibrium reverting acceleration (ERA) model is shown in (a), where the black lines are 100 random realizations, the red line is their mean, and the blue line shows the deterministic transition of the velocity norm of the same ERA model. (b) shows the velocity profiles of 95 real pointing trajectories, where the red line is the mean profile.
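The bell-shaped velocity-norm profile in Figure 2a can be reproduced qualitatively with a simple mean-reverting motion model. The sketch below is one plausible discretization in which the acceleration pulls the state toward the destination while damping the velocity; the gains `eta` and `rho` and the noise scale `q` are illustrative choices, not the paper's ERA parameterization.

```python
import numpy as np

def simulate_era(dest, x0=None, eta=8.0, rho=5.0, q=0.5,
                 dt=0.01, steps=100, rng=None):
    """Simulate one trajectory of a spring-like, equilibrium-reverting model
    and return the velocity norm at each step.

    The acceleration reverts the state toward `dest` (gain eta) while damping
    the velocity (gain rho), plus white noise of scale q. This is a hedged
    sketch of the idea, not necessarily the exact ERA model of the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(3) if x0 is None else np.array(x0, dtype=float)
    v = np.zeros(3)
    speeds = []
    for _ in range(steps):
        # Restoring force toward the destination, velocity damping, plus noise
        a = eta * (dest - x) - rho * v + q * rng.standard_normal(3) / np.sqrt(dt)
        v = v + a * dt
        x = x + v * dt
        speeds.append(np.linalg.norm(v))
    return np.array(speeds)
```

With noise disabled (`q=0`), the speed rises from rest, peaks mid-gesture, and decays as the destination is approached, matching the qualitative shape of the profiles in the figure.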

Figure 3. 1D distributions (along the $ x $-axis of a Cartesian coordinate system) of a pseudo-observation-based bridging distribution with a constant acceleration process (with $ \tilde{G}=\mathbf{I} $, $ {\varSigma}_i=\mathbf{0} $); in this case, the distribution at the endpoint (asterisk) reduces to $ p\left({\mathbf{x}}_N|{\mathcal{D}}_i,\mathcal{T}\right)={\delta}_{{\mathbf{a}}_i}\left({\mathbf{x}}_N\right) $, with $ {\delta}_{\left(\cdot \right)} $ denoting the Dirac delta function. From left to right: (a) $ p\left({x}_n|{\mathcal{D}}_i,\mathcal{T}\right) $; (b) $ p\left({\dot{x}}_n|{\mathcal{D}}_i,\mathcal{T}\right) $; (c) $ p\left({\ddot{x}}_n|{\mathcal{D}}_i,\mathcal{T}\right) $; and (d) the velocity norm. Horizontal axes show time (as a percentage of gesture duration), and dashed lines are distribution means.
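The bridging construction in this caption conditions a motion process on terminating at the intended destination, collapsing the endpoint distribution to a Dirac delta there. As a minimal scalar analogue (a Brownian bridge rather than the constant-acceleration bridge of the figure, with illustrative names throughout), the conditioned process can be sampled forward one step at a time:

```python
import numpy as np

def brownian_bridge(x0, xN, steps, q, rng=None):
    """Sample a discrete Brownian bridge from x0 to xN over `steps` steps.

    Each unconditioned increment has variance q; conditioning on the fixed
    endpoint xN (a Dirac delta there, as in the caption) pulls the mean
    toward the destination and shrinks the variance as it is approached.
    """
    rng = np.random.default_rng() if rng is None else rng
    xs = [float(x0)]
    for n in range(steps - 1):
        remaining = steps - n                      # steps left to reach xN
        mean = xs[-1] + (xN - xs[-1]) / remaining  # pull toward the endpoint
        var = q * (remaining - 1) / remaining      # vanishes near the end
        xs.append(rng.normal(mean, np.sqrt(var)))
    xs.append(float(xN))                           # terminal Dirac at xN
    return np.array(xs)
```

With `q=0` the sampled path degenerates to linear interpolation between `x0` and `xN`, mirroring how the deterministic part of a bridged process heads straight for the endpoint.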

Figure 4. Examples of collected real pointing trajectories.

Table 1. Linear time-invariant (LTI) Gaussian model parameters and overall prediction performance for 95 tracks.

Figure 5. Average successful prediction rate over time (Dataset A).

Figure 6. Average success rate for Dataset B.

Figure 7. Average successful prediction rate over time (Dataset B).

Table 2. Jump model parameter sets.
