
Attention-based deep learning networks for identification of human gait using radar micro-Doppler spectrograms

Published online by Cambridge University Press:  05 July 2021

Hannah Garcia Doherty*
Affiliation:
Thales Nederland B.V., Advanced Development, Delft, The Netherlands
Roberto Arnaiz Burgueño
Affiliation:
University of Oviedo, Oviedo, Spain
Roeland P. Trommel
Affiliation:
Thales Nederland B.V., Advanced Development, Delft, The Netherlands
Vasileios Papanastasiou
Affiliation:
Delft University of Technology, The Netherlands
Ronny I. A. Harmanny
Affiliation:
Thales Nederland B.V., Advanced Development, Delft, The Netherlands
*
Author for correspondence: Hannah Garcia Doherty, E-mail: hannah.garciadoherty@nl.thalesgroup.com
Rights & Permissions [Opens in a new window]

Abstract

Identification of human individuals within a group of 39 persons using micro-Doppler (μ-D) features has been investigated. Deep convolutional neural networks with two different training procedures have been used to perform classification. Visualization of the inner network layers revealed the sections of the input image most relevant when determining the class label of the target. A convolutional block attention module is added to provide a weighted feature vector in the channel and feature dimension, highlighting the relevant μ-D feature-filled areas in the image and improving classification performance.
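The convolutional block attention module (CBAM) mentioned above refines a feature map in two sequential steps: channel attention (global average- and max-pooled descriptors passed through a shared MLP) followed by spatial attention (channel-wise pooling collapsed to one weight per pixel). The sketch below is a minimal NumPy illustration of that weighting scheme, not the authors' implementation; the shapes, the reduction ratio `r`, and the use of a fixed averaging step in place of CBAM's 7×7 convolution are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    # fmap: (H, W, C). Squeeze the spatial dims with both average and max
    # pooling, pass each descriptor through a shared two-layer MLP, sum,
    # then squash to (0, 1) channel weights.
    avg = fmap.mean(axis=(0, 1))                 # (C,)
    mx = fmap.max(axis=(0, 1))                   # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0) # ReLU hidden layer
    return sigmoid(mlp(avg) + mlp(mx))           # (C,)

def spatial_attention(fmap):
    # Pool across channels; the two pooled maps are merged by simple
    # averaging here (CBAM proper learns a 7x7 convolution instead).
    avg = fmap.mean(axis=2)                      # (H, W)
    mx = fmap.max(axis=2)                        # (H, W)
    return sigmoid(0.5 * (avg + mx))             # (H, W)

def cbam(fmap, w1, w2):
    # Sequential refinement: channel attention first, then spatial.
    fmap = fmap * channel_attention(fmap, w1, w2)[None, None, :]
    fmap = fmap * spatial_attention(fmap)[:, :, None]
    return fmap

rng = np.random.default_rng(0)
H, W, C, r = 8, 8, 16, 4                         # r: MLP reduction ratio
feat = rng.standard_normal((H, W, C))            # stand-in conv feature map
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
refined = cbam(feat, w1, w2)
print(refined.shape)                             # (8, 8, 16)
```

Because both attention maps lie in (0, 1), the refined features are an elementwise down-weighting of the input, emphasizing the μ-D-rich regions of the spectrogram.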

Information

Type
Research Paper
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Thales Nederland B.V., 2021. Published by Cambridge University Press in association with the European Microwave Association

Fig. 1. CBAM module. A spectrogram is fed into a 2D convolutional layer, which outputs an H × W × C feature map on which both channel attention and spatial attention are performed. Lower left: Grad-CAM visualization [11] of model I without CBAM.


Fig. 2. Confusion matrices for model A (left) and model B (right).


Fig. 3. 2D t-SNE visualization of model A (left) and model B (right).


Fig. 4. CBAM confusion matrix.


Fig. 5. Upper: input spectrogram. Lower: saliency map.