
Incorporating control inputs in continuous-time Gaussian process state estimation for robotics

Published online by Cambridge University Press:  05 February 2025

Sven Lilge*
Affiliation:
University of Toronto Robotics Institute, University of Toronto, Toronto, Ontario, Canada
Timothy D. Barfoot
Affiliation:
University of Toronto Robotics Institute, University of Toronto, Toronto, Ontario, Canada
Corresponding author: Sven Lilge; Email: sven.lilge@utoronto.ca

Abstract

Continuous-time batch state estimation using Gaussian processes is an efficient approach to estimating the trajectories of robots over time. In the past, relatively simple physics-motivated priors have been considered for such approaches, using assumptions such as constant velocity or acceleration. This paper presents an approach to incorporating exogenous control inputs, such as velocity or acceleration commands, into the continuous Gaussian process state estimation framework. It is shown that this approach generalizes across different domains in robotics, making it applicable to both the estimation of continuous-time trajectories for mobile robots and the estimation of quasi-static continuum-robot shapes. Results show that incorporating control inputs leads to more informed priors, potentially requiring fewer measurements and estimation nodes to obtain accurate estimates. This makes the approach particularly useful in situations in which limited sensing is available. For example, in a mobile robot localization experiment with sparse landmark distance measurements and frequent odometry control inputs, our approach provides accurate trajectory estimates with root-mean-square errors around 3-4 cm and 4-5 degrees, even with time intervals up to five seconds between discrete estimation nodes, which significantly reduces computation time.
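The core idea of treating commands as inputs to the motion prior, rather than as measurements, can be illustrated with a deliberately simplified sketch. This is our own toy example, not the paper's SE(3) formulation: a 1-D constant-velocity state $[p, v]$ is propagated between estimation nodes with a known acceleration command $u$ entering as an exogenous input, so the prior mean already reflects the commanded motion.

```python
import numpy as np

def predict(x, u, dt):
    """Prior-mean prediction for a 1-D [position, velocity] state with a
    known acceleration command u as an exogenous input (noise omitted)."""
    Phi = np.array([[1.0, dt],
                    [0.0, 1.0]])        # state transition matrix
    B = np.array([0.5 * dt**2, dt])     # input matrix for acceleration u
    return Phi @ x + B * u

x = np.array([0.0, 1.0])      # start at 0 m, moving at 1 m/s
x = predict(x, u=2.0, dt=0.5) # commanded acceleration of 2 m/s^2
# -> position 0.75 m, velocity 2.0 m/s
```

In a full batch estimator, this input-driven prior would then be fused with whatever sparse measurements are available, which is what allows accurate estimates even with long intervals between estimation nodes.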

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (https://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Figure 1. Our proposed method incorporates known control inputs into a continuous Gaussian process prior formulation, which is fused with noisy state observations. It is applicable to the estimation of both mobile robot continuous-time trajectories and continuum-robot shapes.


Figure 2. Top: Definition of local pose variables, ${\boldsymbol {\xi }}_k(t)$, between two discrete robot states. Bottom: Example of piecewise-linear inputs between two discrete robot states. The overall transition function between the two states $\boldsymbol{\Phi }_k(t_{k+1},t_k)$ is a product of the individual transition functions for each piecewise-linear segment.
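The composition rule named in the caption above, that the overall transition function $\boldsymbol{\Phi}_k(t_{k+1},t_k)$ is a product of the per-segment transition functions, can be sketched numerically. This is a hedged simplification using a scalar constant-velocity model rather than the paper's pose variables; the function names are our own.

```python
import numpy as np

def cv_transition(dt):
    """Constant-velocity transition matrix for a [position, velocity] state."""
    return np.array([[1.0, dt],
                     [0.0, 1.0]])

def compose_transitions(segment_Phis):
    """Chain per-segment transition matrices into one overall transition,
    Phi(t_{k+1}, t_k) = Phi_n @ ... @ Phi_2 @ Phi_1, with segments given
    in chronological order and later segments applied on the left."""
    Phi = np.eye(segment_Phis[0].shape[0])
    for Phi_i in segment_Phis:
        Phi = Phi_i @ Phi
    return Phi

# Two piecewise segments of 0.2 s and 0.3 s compose to one 0.5 s transition.
Phi_total = compose_transitions([cv_transition(0.2), cv_transition(0.3)])
```

With piecewise-linear inputs, each segment would additionally carry its own input term, but the left-multiplied chaining of the transition matrices is the same.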


Figure 3. Example scenarios using the proposed GP prior formulation, featuring angular velocity inputs (top) and angular acceleration inputs (bottom). Both scenarios highlight the resulting prior as well as the posterior, when considering an additional position measurement. In each case, the angular velocities as well as the rotation and position are plotted over time, including mean and $3\sigma$-covariance envelopes and using red, green, and blue colors for the $x$, $y$, and $z$ components, respectively. Estimation nodes are highlighted with diamonds. Additional renderings of the resulting trajectories are depicted on the right.


Figure 4. Left: Setup of the mobile robot localization experiment. Right: Ground-truth data of the mobile robot trajectory and landmark positions.


Table I. Hyperparameters used for mobile robot experiment.


Table II. Mobile robot trajectory estimation results considering odometry readings as measurements versus as inputs.


Figure 5. Example state estimation results using the first 25% of the mobile robot dataset. Trajectory estimates are depicted in black with ground truth in red. Landmarks are shown in blue. The discrete robot state is estimated every 5 s, coinciding with landmark distance measurements, while the continuous state relies on interpolation. On the top, the results for the newly proposed state estimation method are shown, using the odometry readings as velocity inputs. On the bottom, the conventional WNOA GP prior is used, considering the odometry as velocity measurements instead.


Figure 6. Tendon-driven continuum-robot prototype and experimental setup.


Table III. Continuum robot shape estimation results using priors with and without known actuation inputs.


Figure 7. Example continuum-robot state estimation results using prior formulations without and with known actuation inputs. Results are presented for both the full pose measurement (top) and the position-only measurement (bottom) of the continuum robot’s tip.