
An algorithm to reduce human–robot interface compliance errors in posture estimation in wearable robots

Published online by Cambridge University Press:  27 December 2022

Gleb Koginov*
Affiliation:
Sensory-Motor Systems Lab, Institute of Robotics and Intelligent Systems, Zürich, Switzerland MyoSwiss AG, Zürich, Switzerland
Kanako Sternberg
Affiliation:
Sensory-Motor Systems Lab, Institute of Robotics and Intelligent Systems, Zürich, Switzerland
Peter Wolf
Affiliation:
Sensory-Motor Systems Lab, Institute of Robotics and Intelligent Systems, Zürich, Switzerland
Kai Schmidt
Affiliation:
MyoSwiss AG, Zürich, Switzerland
Jaime E. Duarte
Affiliation:
MyoSwiss AG, Zürich, Switzerland
Robert Riener
Affiliation:
Sensory-Motor Systems Lab, Institute of Robotics and Intelligent Systems, Zürich, Switzerland Reharobotics Group, Spinal Cord Injury Center, Balgrist University Hospital, Medical Faculty, University of Zurich, Zürich, Switzerland
*
*Author for correspondence: Gleb Koginov, Email: gkoginov@ethz.ch

Abstract

Assistive forces transmitted from wearable robots to their users are often defined by controllers that rely on an accurate estimation of the human posture. The compliant nature of the human–robot interface can negatively affect the robot’s ability to estimate this posture. In this article, we present a novel algorithm that uses machine learning to correct these errors in posture estimation. For that, we recorded motion capture data and robot performance data from a group of participants (n = 8; 4 females) who walked on a treadmill while wearing a wearable robot, the Myosuit. Participants walked on level ground at various gait speeds and levels of support from the Myosuit. We used optical motion capture data to measure the relative displacement between the person and the Myosuit. We then combined these data with robot-derived data to train a model, using a gradient boosting algorithm (XGBoost), that corrected the mechanical compliance errors in posture estimation. For the Myosuit controller, we were particularly interested in the angle of the thigh segment. Using our algorithm, the RMS error of the estimated thigh segment angle was reduced from 6.3° (2.3°) to 2.5° (1.0°), mean (standard deviation). The average maximum error was reduced from 13.1° (4.9°) to 5.9° (2.1°). These improvements in posture estimation were observed for all of the considered assistance force levels and walking speeds. This suggests that ML-based algorithms are a promising complement to wearable-robot sensors for accurate estimation of the user’s posture.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Table 1. List of features used for the algorithm


Figure 1. Architecture and the operation principle of the Myosuit. (a) The Myosuit is a textile-based wearable robot to support the lower limbs. It is comprised of a textile harness that houses two motors, control electronics, and a battery. Two artificial tendons are routed from the motors posteriorly over the hip joint and anteriorly over the knee joint. Low-weight orthoses are placed on the user’s lower limbs to route and anchor the tendons. (b) The Myosuit supports the weight-bearing phase of walking. Here the mean and standard deviation of the forces measured during the experimental protocol and averaged across all participants and conditions are shown. The assisting forces are modulated based on the relative angle between the thigh and shank segments. The segment angles and walking events are estimated using a set of 9-axis IMUs mounted on the shank, thigh, and trunk segments of the user’s body.


Figure 2. Graphical representation of the study design. The participants were asked to walk at three levels of Myosuit assistance. For each of these levels, the participants walked in transparency mode, and at 0.8 and 1.3 m/s with Myosuit assistance turned on. In between each of these dynamic conditions, a static force ramping experiment was performed. For that, the participants were asked to stand still and a target force of 130 N was applied twice. The overall duration of the experiment was approximately 90 min, including the time for Myosuit donning and familiarization.


Figure 3. Marker placements from the front (a) and rear (b). Clouds of four and five markers were placed on the participant’s thigh and shank, respectively (highlighted in green). Clouds of four markers were placed on the thigh and shank components of the Myosuit (highlighted in orange). The choice of marker cloud sizes was driven by an initial sensitivity study in which the chance of occlusion, marker loss, and marker stability were analysed. Additionally, markers were placed on the motor driving unit, the left and right acromion, and the C7 vertebra (highlighted in blue). (c) Angle convention for the shank and thigh segments in the sagittal plane. The thigh angle (here $ {\gamma}_t $) is measured between the biological thigh and a vertical line passing through the knee joint’s centerline, with positive angles measured in the counter-clockwise direction. The shank angle (here $ {\gamma}_s $) is measured between the biological shank and the vertical line passing through the ankle joint’s centerline, with positive angles measured in the counter-clockwise direction. This angular convention was chosen as it matched the one used by the Myosuit controller.
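The angle convention described above can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors’ code: it assumes 2D sagittal-plane marker coordinates (x anterior, y up) oriented so that a forward tilt of the segment’s proximal end yields a positive, counter-clockwise angle from the vertical; the paper’s exact coordinate frame may differ.

```python
import math

def segment_angle_deg(proximal_xy, distal_xy):
    """Angle (degrees) between a body segment and the vertical line
    through its distal joint, positive counter-clockwise.

    Assumes sagittal-plane coordinates (x anterior, y up), oriented so
    that a forward tilt of the proximal end gives a positive angle.
    """
    dx = proximal_xy[0] - distal_xy[0]
    dy = proximal_xy[1] - distal_xy[1]
    return math.degrees(math.atan2(dx, dy))

# Thigh angle gamma_t: hypothetical hip (proximal) and knee (distal)
# marker positions, with the thigh tilted 30 degrees forward.
hip, knee = (0.5, 0.8660254), (0.0, 0.0)
print(round(segment_angle_deg(hip, knee), 1))  # 30.0
```

The same function applies to the shank angle $ {\gamma}_s $ by passing knee (proximal) and ankle (distal) marker positions.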


Figure 4. Schematic representation of the implemented pipeline for compliance error compensation. Three main sources of data are used: motion capture of human segments (triangles, $ {y}_{human} $) and robot segments (circles, $ {y}_{robot} $), and robot-sensor-derived data (rhombus). The latter and $ {y}_{robot} $ are used to construct the feature vector for the gradient boosting algorithm. The $ {y}_{human} $ variable is used as the target variable. The data from the eight study participants are then arranged such that six participants form the training set, one the validation set, and one the test set. This splitting strategy was repeated eight times to show the model’s generalizability across the data of all of the study participants.
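The leave-one-subject-out splitting strategy described in the caption can be sketched as follows. The participant labels are hypothetical, and the choice of pairing each test participant with the next one as validation is an assumption for illustration; the paper only states that one participant each is held out for validation and testing.

```python
def loso_splits(participants):
    """Rotate a leave-one-subject-out split: each participant serves
    once as the test set, another as the validation set, and the
    remaining six form the training set."""
    n = len(participants)
    for i in range(n):
        test = participants[i]
        val = participants[(i + 1) % n]  # assumed pairing, illustrative
        train = [p for p in participants if p not in (test, val)]
        yield train, val, test

# Eight hypothetical participants A..H -> eight train/val/test splits.
splits = list(loso_splits(list("ABCDEFGH")))
print(len(splits))  # 8
```

Each split would then feed one XGBoost training run, with the held-out test participant used to report subject-independent errors.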


Table 2. List of tuned XGBoost hyperparameters used in the segment estimation algorithm
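Table 2 itself is not reproduced on this page. As an illustration only, the hyperparameters commonly tuned for an XGBoost regressor on tabular data of this kind are shown below; the parameter names are real XGBoost options, but the values are placeholders, not the tuned settings from Table 2.

```python
# Illustrative only: typical XGBoost regressor hyperparameters with
# placeholder values -- NOT the tuned settings reported in Table 2.
params = {
    "n_estimators": 300,      # number of boosting rounds
    "max_depth": 6,           # depth of each regression tree
    "learning_rate": 0.05,    # shrinkage applied to each tree
    "subsample": 0.8,         # fraction of rows sampled per tree
    "colsample_bytree": 0.8,  # fraction of features sampled per tree
    "min_child_weight": 5,    # minimum hessian sum per leaf
    "reg_lambda": 1.0,        # L2 regularisation on leaf weights
}
# Usage (requires the xgboost package): xgboost.XGBRegressor(**params)
print(sorted(params))
```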


Figure 5. Compliance errors ($ {y}_{human}-{y}_{robot} $). RMS errors for (a) thigh and (b) shank segments averaged across all gait cycles. The error bars represent ± 1 standard deviation.
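The error metrics reported in Figures 5 and 6 can be computed as below. The angle traces here are hypothetical stand-ins for the motion-capture ($ {y}_{human} $) and robot ($ {y}_{robot} $) estimates over one gait cycle.

```python
import numpy as np

def compliance_errors(y_human, y_robot):
    """RMS and maximum absolute compliance error between the
    motion-capture segment angle and the robot's estimate."""
    err = np.asarray(y_human, dtype=float) - np.asarray(y_robot, dtype=float)
    rms = float(np.sqrt(np.mean(err ** 2)))
    max_err = float(np.max(np.abs(err)))
    return rms, max_err

# Hypothetical 4-sample angle traces (degrees) over one gait cycle.
rms, max_err = compliance_errors([10.0, 20.0, 30.0, 40.0],
                                 [8.0, 22.0, 28.0, 42.0])
print(rms, max_err)  # 2.0 2.0
```

In the study, these per-cycle values would be averaged across all gait cycles and participants to produce the bars shown in the figures.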


Figure 6. Model results. (a,b) Comparison of the thigh segment angle errors. The compliance errors before and after the correction by the XGBoost models are displayed. (a) The averaged RMS and (b) the averaged maximum angle errors. The error bars represent ± 1 standard deviation. (c) Thigh segment angle throughout the gait cycle for participant E. The plot shows mean and standard deviation over $ n=2579 $ gait cycles averaged over all assistance and speed levels. The results of $ {\hat{y}}_{human} $ represent the performance of our algorithm tested on the data of participant E in a subject-independent manner (i.e., the particular participant’s data were not used for model training or validation).


Figure 7. Effects of force and speed. Plots showing the dependence of the corrected and the uncorrected mean RMS errors on the assistance level used. Results for the 0.8 and 1.3 m/s speeds are shown on the left- and right-hand sides, respectively. The error bars represent ± 1 standard deviation.