
A clustering-based method for estimating pennation angle from B-mode ultrasound images

Published online by Cambridge University Press:  01 March 2023

Xuefeng Bao
Affiliation:
Department of Biomedical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
Qiang Zhang
Affiliation:
Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA; Joint Department of Biomedical Engineering, The University of North Carolina, Chapel Hill, NC, USA
Natalie Fragnito
Affiliation:
Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA; Joint Department of Biomedical Engineering, The University of North Carolina, Chapel Hill, NC, USA
Jian Wang
Affiliation:
Snap Inc., New York, NY, USA
Nitin Sharma*
Affiliation:
Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA; Joint Department of Biomedical Engineering, The University of North Carolina, Chapel Hill, NC, USA
*Author for correspondence: Nitin Sharma, Email: nsharm23@ncsu.edu

Abstract

B-mode ultrasound (US) is often used to noninvasively measure skeletal muscle architecture, which contains human intent information. Features extracted from B-mode images can help improve closed-loop human–robot interaction control when using rehabilitation/assistive devices. The traditional manual approach to inferring muscle structural features from US images is laborious, time-consuming, and subjective across investigators. This paper proposes a clustering-based detection method that mimics a well-trained human expert in identifying the fascicles and aponeuroses and, therefore, computing the pennation angle. The clustering-based architecture assumes that muscle fibers have tubular characteristics, and it is robust for low-frequency image streams. We compared the proposed algorithm to two mature benchmark techniques: UltraTrack and ImageJ. The proposed approach showed higher accuracy on our dataset (frame rate: 20 Hz), that is, accuracy comparable to that of a human expert. The proposed method shows promising potential for automatic detection of muscle fascicle orientation, facilitating implementations in biomechanics modeling, rehabilitation robot control design, and neuromuscular disease diagnosis with low-frequency data streams.
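Given the fascicle and aponeurosis orientations in the image plane, the pennation angle (PA) is the acute angle between them. A minimal sketch of that final computation, assuming both orientations have already been estimated in degrees from the horizontal image axis (the function name and the example angles are illustrative, not taken from the paper):

```python
import numpy as np

def pennation_angle(fascicle_deg: float, aponeurosis_deg: float) -> float:
    """Acute angle between the fascicle and aponeurosis orientations.

    Orientations are undirected lines, so they are only defined modulo
    180 degrees; the PA is the smaller of the two angles between them.
    """
    diff = abs(fascicle_deg - aponeurosis_deg) % 180.0
    return min(diff, 180.0 - diff)

# Example: fascicles inclined 18 deg, aponeurosis nearly flat at 2 deg
print(pennation_angle(18.0, 2.0))  # -> 16.0
```

Treating the orientations modulo 180 degrees matters because a line estimated at 170 degrees and one at 10 degrees are only 20 degrees apart, not 160.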

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Figure 1. A typical B-mode US image of the TA muscle. The $ x $-axis is the distance along the probe’s longitudinal direction (the center line is the zero position, and the left and right sides represent the proximal and distal directions), and the $ y $-axis is the depth of the TA muscle. The RGB data of every pixel were converted to grayscale values between 0 and 255.

Figure 2. The segmentation of the upper pennate section of the TA muscle, where the muscle fascicles are in the “top” sub-image and the middle aponeurosis is in the “bottom” sub-image. These two sub-images are the two regions of interest (ROIs) used to detect the muscle fascicles and the middle aponeurosis, respectively.

Figure 3. (a) The definition of the tubular-shaped muscle fascicle orientation angle obtained through clustering, (b) the criterion for reclustering, that is, merging two clusters into one, and (c) the value assigned to each cluster.

Figure 4. The pipeline of the muscle fascicle detection and the details of the image processing at each step. In the machine-labeled image, the yellow line represents the orientation of the cluster with the highest value in the top sub-image, and the purple line is the corresponding orientation in the bottom sub-image.

Figure 5. How the augmentation filter improves the detection accuracy in the presence of (a) a high-value noise cluster and (b) an unclear cluster.

Figure 6. The designed viscous function. The vertical bars are samples, that is, the real incremental data. The kernel curve is the kernel density estimate of the samples, computed with scikit-learn.
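The kernel density estimate referenced in this caption can be reproduced with scikit-learn's `KernelDensity`. A minimal sketch; the sample values, kernel choice, and bandwidth below are illustrative placeholders, not the paper's actual incremental data or tuning:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical incremental-angle samples (degrees between frames)
samples = np.array([-0.4, -0.1, 0.0, 0.1, 0.2, 0.3, 0.9])[:, None]

# Gaussian kernel; bandwidth chosen by hand for illustration
kde = KernelDensity(kernel="gaussian", bandwidth=0.25).fit(samples)

# Evaluate the density on a grid; score_samples returns log p(x)
grid = np.linspace(-2.0, 2.0, 401)[:, None]
density = np.exp(kde.score_samples(grid))

# The density peaks near the bulk of the samples
print(grid[np.argmax(density), 0])
```

Evaluating the estimate on a grid like this yields the smooth curve drawn over the sample bars in the figure.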

Figure 7. The accuracy of (a) the detection of the fascicle orientation with fine-tuned parameters using the DBSCAN clustering method and (b) the image flow with the viscosity function and the fine-tuned parameters. In (a), four sets of parameters were used individually on different frames to obtain accurate estimations. In (b), the four sets of parameters were used simultaneously on all frames, and the weighted average was taken to give the final estimation of the muscle orientations.
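The paper's DBSCAN-based pipeline is more involved than can be shown here, but the core step it describes is estimating an orientation from a cluster of bright pixels. A minimal sketch under stated assumptions: the fascicle pixels are already thresholded to (x, y) coordinates, the `eps`/`min_samples` values and the synthetic test data are illustrative, and the orientation of the largest cluster is taken from its principal axis:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dominant_orientation(points: np.ndarray, eps: float = 2.0,
                         min_samples: int = 5) -> float:
    """Cluster pixel coordinates with DBSCAN and return the orientation
    (degrees from horizontal, in [0, 180)) of the largest cluster,
    taken from the principal axis of its coordinate covariance."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    biggest = np.bincount(labels[labels >= 0]).argmax()  # ignore noise (-1)
    cluster = points[labels == biggest]
    eigvals, eigvecs = np.linalg.eigh(np.cov(cluster.T))
    vx, vy = eigvecs[:, np.argmax(eigvals)]  # largest-variance direction
    return np.degrees(np.arctan2(vy, vx)) % 180.0

# Synthetic tubular cluster inclined ~15 deg, plus scattered noise points
rng = np.random.default_rng(0)
t = rng.uniform(0, 50, 200)
line = np.column_stack([t, t * np.tan(np.radians(15.0))])
line += rng.normal(0, 0.3, line.shape)
noise = rng.uniform(0, 50, (10, 2))
pts = np.vstack([line, noise])
print(round(dominant_orientation(pts), 1))  # close to 15.0
```

Wrapping the angle into [0, 180) keeps the result consistent regardless of which direction along the fascicle the principal axis happens to point.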

Table 1. The algorithm flow for detecting the PA from the data stream

Figure 8. The PA detection using our method with DBSCAN, K-means, and HAC clustering, and the benchmark methods, that is, UltraTrack and ImageJ. The data shown here are from the first participant at dorsiflexion angles of 5° (Trials 1 and 2) and 15° (Trial 3). The isometric ankle torques are also plotted to show their high correlation with the PA sequences.

Table 2. Summary of PA detection results obtained by using the proposed approach on one representative participant’s TA muscle with the ankle joint held at different postures

Figure 9. The control panel and operation demonstration of using UltraTrack (optical flow).

Figure 10. (a) The highlighted ROI (the ROI selected for cropping the TA muscle fascicles), the macro-pre-processed ROI (the ROI after the image passed through the functions within the macro), and the ROI after DistributionJ processing of a US image frame (used to determine the orientation distribution). (b) The mean and standard deviation of the correlation coefficient between the ImageJ-detected angle and the ground truth across different inclusion percentages.

Figure 11. Representative tracking results, showing that even when the fascicles are correctly detected, there is still a considerable error in the PA calculation.

Figure 12. The tracking error of (a) the proposed method and (b) UltraTrack. In (a), the mean errors for the three subjects are 0.03°, −0.55°, and −0.63°, respectively, while the standard deviations (the lines) are 0.70°, 0.94°, and 0.96°, respectively. In (b), the mean errors for the three subjects are 0.41°, −1.40°, and −0.76°, respectively, while the standard deviations (the lines) are 1.10°, 1.38°, and 0.97°, respectively.

Supplementary material: PDF

Bao et al. supplementary material (PDF, 5.9 MB)