
IMU as an input versus a measurement of the state in inertial-aided state estimation

Published online by Cambridge University Press:  03 January 2025

Keenan Burnett*
Affiliation:
University of Toronto Institute for Aerospace Studies, Toronto, ON, Canada
Angela P. Schoellig
Affiliation:
Technical University of Munich, Munich, Germany
Timothy D. Barfoot
Affiliation:
University of Toronto Institute for Aerospace Studies, Toronto, ON, Canada
*
Corresponding author: Keenan Burnett; Email: keenan.burnett@robotics.utias.utoronto.ca

Abstract

Treating inertial measurement unit (IMU) measurements as inputs to a motion model and then preintegrating these measurements has almost become a de facto standard in many robotics applications. However, this approach has a few shortcomings. First, it conflates the IMU measurement noise with the underlying process noise. Second, it is unclear how the state will be propagated in the case of IMU measurement dropout. Third, it does not lend itself well to dealing with multiple high-rate sensors such as a lidar and an IMU or multiple asynchronous IMUs. In this paper, we compare treating an IMU as an input to a motion model against treating it as a measurement of the state in a continuous-time state estimation framework. We methodically compare the performance of these two approaches on a 1D simulation and show that they perform identically, assuming that each method’s hyperparameters have been tuned on a training set. We also provide results for our continuous-time lidar-inertial odometry in simulation and on the Newer College Dataset. In simulation, our approach exceeds the performance of an IMU-as-input baseline during highly aggressive motion. On the Newer College Dataset, we demonstrate state-of-the-art results. These results show that continuous-time techniques and the treatment of the IMU as a measurement of the state are promising areas of further research. Code for our lidar-inertial odometry can be found at: https://github.com/utiasASRL/steam_icp.
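The 1D contrast between the two IMU treatments can be sketched as follows. This is a hypothetical illustration only, not the paper's implementation: the state layouts (position/velocity for IMU-as-input; position/velocity/acceleration for IMU-as-measurement) and the step size are assumptions. It shows that, absent jerk noise, the mean propagation of the two models agrees.

```python
import numpy as np

dt = 0.01  # assumed IMU period

# --- IMU-as-input: the accelerometer reading a_meas acts as a control input,
#     so its noise is folded into the process noise ---
def propagate_as_input(p, v, a_meas):
    p_new = p + v * dt + 0.5 * a_meas * dt**2
    v_new = v + a_meas * dt
    return p_new, v_new

# --- IMU-as-measurement: acceleration is part of the state and evolves under a
#     white-noise-on-jerk prior; the accelerometer reading then becomes an
#     ordinary measurement of that state ---
def propagate_as_measurement(x):
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])  # mean transition of the WNOJ prior
    return F @ x

# With zero jerk noise, the two mean predictions coincide:
p, v, a = 0.0, 1.0, 0.5
p1, v1 = propagate_as_input(p, v, a)
x1 = propagate_as_measurement(np.array([p, v, a]))
print(np.allclose([p1, v1], x1[:2]))  # True
```

The difference between the approaches shows up not in the mean but in how the noise is modeled: as-input inflates the process noise with the accelerometer noise, whereas as-measurement keeps the two noise sources separate.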

Information

Type
Research Article
Creative Commons
CC BY licence
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Figure 1. In this factor graph, we consider a case where we would like to marginalize several states out of the full Bayesian posterior. The triangles represent states and the black dots represent factors. This factor graph could potentially correspond to doing continuous-time state estimation with binary motion prior factors, unary measurement factors, and a unary prior factor on the initial state $\mathbf{x}_0$.


Figure 2. This figure depicts the results of our preintegration, which can incorporate heterogeneous factors. The resulting joint Gaussian factor in (26) can be thought of as two unary factors, one each for $\mathbf{x}_k$ and $\mathbf{x}_{k+1}$, and an additional binary factor between $\mathbf{x}_k$ and $\mathbf{x}_{k+1}$.


Figure 3. The estimated trajectories of IMU-as-input and IMU-as-measurement are plotted alongside the ground-truth trajectory, which is sampled from a white-noise-on-jerk motion prior with $Q_c = 1.0$. The hyperparameters of both methods were tuned on a hold-out validation set of simulated trajectories.


Figure 4. This figure depicts 1000 simulated trajectories sampled from a white-noise-on-jerk (WNOJ) prior where $\check{\mathbf{x}}_{0} = [0.0\,0.0\,1.0]^T$, $\check{\mathbf{P}}_{0} = \textrm{diag}(0.001, 0.001, 0.001)$, $Q_c = 1.0$.
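Trajectories like those in Figure 4 can be drawn from a discrete-time white-noise-on-jerk (WNOJ) prior. The sketch below is a hypothetical reconstruction (the discretized transition and process-noise covariance are the standard WNOJ forms, but the step size and number of trajectories are assumptions, and 200 samples are drawn rather than 1000 to keep the example fast):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, Qc, steps, n_traj = 0.01, 1.0, 1000, 200

# State x = [position, velocity, acceleration]
F = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
# Standard WNOJ process-noise covariance for one step of length dt
Q = Qc * np.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                   [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                   [dt**3 / 6,  dt**2 / 2, dt]])
Lq = np.linalg.cholesky(Q)

x0_mean = np.array([0.0, 0.0, 1.0])        # matches the caption's initial mean
L0 = np.linalg.cholesky(np.diag([0.001, 0.001, 0.001]))

def sample_trajectory():
    x = x0_mean + L0 @ rng.standard_normal(3)
    traj = [x]
    for _ in range(steps):
        x = F @ x + Lq @ rng.standard_normal(3)  # propagate and inject jerk noise
        traj.append(x)
    return np.array(traj)

trajs = np.array([sample_trajectory() for _ in range(n_traj)])
```

Each sampled trajectory has shape `(steps + 1, 3)`; plotting the position component of all samples reproduces the fan of trajectories shown in the figure.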


Figure 5. This figure compares the performance of the baseline IMU-as-input approach versus our proposed IMU-as-measurement approach that leverages a Gaussian process (GP) motion prior. The ground-truth trajectories are sampled from a white-noise-on-jerk (WNOJ) prior as shown in Figure 4. The IMU input covariance $Q_k$ for the IMU-as-input method was trained on a validation set, with a value of $0.00338$. The parameters of our proposed method were also trained on the same validation set, with resulting values of $\sigma ^2 = 1.0069$ and $\alpha = 0.0$. $R_{\textrm{pos}}$ for both methods was chosen to match the simulated noise added to the position measurements. Similarly, $R_{\textrm{acc}}$ for the IMU-as-measurement approach was set to match the simulated noise added to the acceleration measurements. In the bottom row, the chi-squared bounds have a different size because the dimension of the state in IMU-as-measurement is greater (it includes acceleration), so the dimension of the chi-squared distribution increases, resulting in a tighter chi-squared bound.
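The effect of state dimension on the chi-squared consistency bounds can be seen numerically. The sketch below is a hedged illustration (the number of Monte Carlo runs `N` and the confidence level are assumptions, not values from the paper): it computes the 95% interval for the averaged normalized estimation error squared (NEES), normalized per state dimension, and shows it is relatively tighter for the larger state.

```python
from scipy.stats import chi2

# 95% interval for the NEES averaged over N runs, divided per dimension.
# For an n-dimensional consistent estimator, N * (average NEES) ~ chi2(N * n).
def nees_bounds(n, N=1000, q=0.95):
    lo = chi2.ppf((1 - q) / 2, N * n) / N
    hi = chi2.ppf(1 - (1 - q) / 2, N * n) / N
    return lo, hi

lo2, hi2 = nees_bounds(2)  # e.g. a position/velocity state
lo3, hi3 = nees_bounds(3)  # e.g. a position/velocity/acceleration state
# Per-dimension interval width shrinks as the state dimension grows:
print((hi2 - lo2) / 2, (hi3 - lo3) / 3)
```

This is why the bottom row of the figure shows a tighter band for IMU-as-measurement: its state includes acceleration, so the chi-squared distribution has more degrees of freedom and concentrates relative to its mean.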


Figure 6. This figure depicts 1000 simulated trajectories sampled from a Singer prior where $\check{\mathbf{x}}_{0} = [0.0\,1.0\,0.0]^T$, $\check{\mathbf{P}}_{0} = \textrm{diag}(0.001, 0.001, 0.001)$, $\alpha = 10.0$, and $\sigma ^2 = 1.0$. A large value of $\alpha$ is intended to approximate a white-noise-on-acceleration (WNOA) prior.
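Under the Singer prior, acceleration follows an Ornstein-Uhlenbeck process, decaying toward zero at rate $\alpha$ while being driven by white noise. A minimal Euler-Maruyama sketch (an assumption-laden approximation; the paper would use the exact discretization, and the step size here is arbitrary) shows how large $\alpha$ keeps acceleration mean-reverting, approximating a WNOA prior:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, alpha, sigma2, steps = 0.001, 10.0, 1.0, 1000

# State [position, velocity, acceleration]; acceleration is an
# Ornstein-Uhlenbeck process: da = -alpha * a * dt + sqrt(sigma2) * dW.
def sample_singer(x0):
    p, v, a = x0
    traj = [(p, v, a)]
    for _ in range(steps):
        p += v * dt
        v += a * dt
        a += -alpha * a * dt + np.sqrt(sigma2 * dt) * rng.standard_normal()
        traj.append((p, v, a))
    return np.array(traj)

traj = sample_singer([0.0, 1.0, 0.0])  # initial mean from the caption
```

In the limit $\alpha \to \infty$ (with suitably scaled $\sigma^2$) the acceleration whitens and the prior tends toward white-noise-on-acceleration, which is the approximation the caption refers to.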


Figure 7. This figure summarizes the results of our second simulation experiment, where the ground-truth trajectories were sampled from a Singer prior with $\alpha = 10.0$, $\sigma ^2 = 1.0$. The trained acceleration input covariance for IMU-as-input was $Q_k \approx 0.00283$. The trained Singer prior parameters were $\alpha = 10.2442$ and $\sigma ^2 = 1.0074$. Note that even though the underlying prior changed substantially from the first simulation to the second, from a white-noise-on-jerk prior to an approximation of a white-noise-on-acceleration prior, both estimators remained unbiased and consistent.


Figure 8. This figure depicts a factor graph of our sliding-window lidar-inertial odometry using a continuous-time motion prior. The larger triangles represent the estimation times that form our sliding window. The state $\mathbf{x}(t) = \{\mathbf{T}(t), \boldsymbol{\varpi }(t), \dot{\boldsymbol{\varpi }}(t), \mathbf{b}(t) \}$ includes the pose $\mathbf{T}(t)$, the body-centric velocity $\boldsymbol{\varpi }(t)$, the body-centric acceleration $\dot{\boldsymbol{\varpi }}(t)$, and the IMU biases $\mathbf{b}(t)$. The gray-shaded state $\mathbf{x}_{k-2}$ is next to be marginalized and is held fixed during the optimization of the current window. The smaller triangles are interpolated states that we do not directly estimate during the optimization process. Instead, we construct continuous-time measurement factors using the posterior Gaussian process interpolation formula. We include a unary prior on $\mathbf{x}_{k-2}$ to denote the prior information from the sliding-window filter.


Figure 9. This figure depicts a factor graph of our baseline approach that uses IMU measurements to de-skew the pointcloud and to form relative motion priors using classic preintegration. The larger triangles represent the estimation times that form our sliding window. The state $\mathbf{x}(t_k) = \{\mathbf{C}_{iv}(t_k), \mathbf{r}_i^{vi}(t_k), \mathbf{v}^{vi}_i(t_k), \mathbf{b}(t_k) \}$ includes the orientation and position in a global frame, the velocity in a global frame, and the IMU biases. The gray-shaded state $\mathbf{x}_{k-2}$ is next to be marginalized. The smaller triangles are extrapolated states that we do not directly estimate during the optimization process. Instead, we extrapolate for these states using IMU integration starting at an estimated state. The factor graph includes a unary prior on $\mathbf{x}_{k-2}$ to denote the prior information from the sliding-window filter.


Figure 10. This figure depicts an example lidar pointcloud produced by our simulation, which contains motion distortion. The pointcloud is colored according to the wall from which each lidar point was reflected.


Figure 11. This figure depicts the results of our lidar-inertial simulation, where the ground-truth position (dashed line) is compared to the position estimated by Singer-LIO, colored by the absolute position error. This trajectory is an example of one of the slow sequences.


Table I. Simulation parameter ranges for the different motion regimes.


Table II. Simulation results. Root-mean-squared absolute trajectory error (m). For each speed category (slow, medium, and fast), 20 randomized sequences were created. The results in this table are the overall root-mean-squared absolute trajectory error across the 20 sequences for each approach.


Table III. Newer College Dataset results. Root-mean-squared absolute trajectory error (m). Estimated trajectories are aligned with ground truth using the Umeyama algorithm. * uses loop closures, results obtained from [21], uses camera images.

Supplementary material: File

Burnett et al. supplementary material
Download Burnett et al. supplementary material (File, 283.5 KB)