J. Electromagn. Eng. Sci. Volume 24(4); 2024
Nguyen, Doan, Pham, and Le: SRCNN: Stacked-Residual Convolutional Neural Network for Improving Human Activity Classification Based on Micro-Doppler Signatures of FMCW Radar

Abstract

Current methods for daily human activity classification primarily rely on optical images from cameras or wearable sensors. Despite their high detection reliability, camera-based approaches suffer from several drawbacks, including poor performance in low-light conditions, limited range, and privacy concerns. To address these limitations, this article proposes the use of a frequency-modulated continuous wave radar sensor for activity recognition. A stacked-residual convolutional neural network (SRCNN) is introduced to classify daily human activities based on the micro-Doppler features of returned radar signals. The model employs a two-layer stacked-residual structure to reuse former features, thereby improving classification accuracy. The model is fine-tuned with different hyperparameters to find a trade-off between classification accuracy and inference time. Evaluations are conducted through training and testing on both simulated and measured datasets. As a result, the SRCNN model with six stacked-residual blocks and 64 filters achieves the best performance, with accuracies exceeding 95% and 99% at 0 dB and 10 dB, respectively. Remarkably, the proposed model outperforms several state-of-the-art CNN models in terms of classification accuracy and execution time on the same datasets.

I. Introduction

In the past few years, the immediate detection and accurate classification of human activities have become critical health concerns. Timely detection of abnormal activities that are potentially dangerous, such as falls due to stroke or myocardial infarction, is especially crucial for the elderly and people with special needs [1]. Such activities can be monitored using wearable devices, cameras, and modern sensors. Although wearable devices like smartphones or smartwatches offer significant benefits, they can be inconvenient for users to wear continuously. In addition, vision-based devices may raise privacy concerns [2] and are ineffective in dark or foggy conditions. To address these issues, radar-based non-wearable devices offer a promising solution for non-contact monitoring of human movements, as they perform well even in low-visibility environments.
When individuals engage in daily activities, the simultaneous movements of different body parts, such as swinging the arms and legs, rotating the body, and bending, are categorized as micromotions, distinct from the overall translational motion of the body. These micromotions create the micro-Doppler (m-D) effect, an additional modulation in the Doppler frequency shift observed during movement. In the context of human motion, this effect arises from limb movements, leading to unique m-D signatures derived from the intricate motions of the torso and the limbs that correspond to specific activities [3]. Researchers have extensively utilized and studied these m-D signatures, often represented as spectrograms, for the classification of daily activities and gait analysis [4, 5].
About a decade ago, the detection and classification of human activities using radar technology relied on machine learning (ML) algorithms that manually extracted features from the m-D signatures in received radar signals [3]. Although these approaches achieved reasonable results, their classification accuracy often varied with the expertise of the human analyst.
In recent years, deep learning has emerged as a leading technology for activity monitoring and recognition. Convolutional neural networks (CNNs), capable of automatic feature extraction, offer potential solutions to overcome the limitations of ML-based methods and achieve higher classification accuracy. For instance, in [6], two conventional classifiers (support vector machine and k-nearest neighbors) and a well-known CNN (GoogLeNet) were utilized to recognize activities based on m-D signatures captured by a frequency-modulated continuous wave (FMCW) radar. However, these models exhibited high false alarm rates, with error rates of 21.75%, 22.85%, and 25.3%, respectively. In [7], various pre-trained models, such as AlexNet, VGGNet, and a custom-designed CNN, were explored for extracting and categorizing human activities. VGGNet, known for its deep network structure, achieved the highest accuracy of 95%. However, this improvement came at the cost of increased processing time and complexity.
Despite achieving higher recognition rates compared to ML-based classifiers, transfer learning has notable drawbacks. First, the model’s structure relies solely on the design of the chosen CNN and remains unadjusted to suit the input dataset. Consequently, the architecture might not be optimal for the given task, leading to potential issues of overfitting or underfitting. Second, the pre-trained model’s optimal weights may not be suitable for the new context and require updating with new training data. This can result in suboptimal performance for new data types, such as the m-D spectrogram of FMCW radar, and may restrict the model’s ability to generalize across different contexts. Therefore, carefully considering the chosen CNN architecture and updating the pre-trained model weights with the appropriate training data are necessary to ensure optimal performance in new contexts.
In recent years, custom-designed models have emerged as a novel approach for researchers. This approach involves adjusting the standard CNN structure, constructing lightweight models with simple forms, and designing improved models using contemporary techniques. Developing a customized deep-learning model necessitates balancing multiple factors; accuracy, processing speed, and noise sensitivity are particularly prioritized among them. Custom-designed models have gained popularity in various fields, such as signal modulation classification [8, 9], object classification [10, 11], text classification [12], and human activity classification [7, 13, 14]. However, their use in detecting human activities based on m-D spectrogram images from FMCW radar remains limited. To our knowledge, [7], [13], and [14] represent the main advancements in this area, with [13] being the most recent. In [7] and [14], two custom-designed deep CNNs (DCNNs) were proposed to effectively address feature extraction and recognition issues for human activities using spectrogram images. However, these models showed lower accuracy compared to pre-trained models due to their simplistic structures, consisting only of a few convolutional layers connected directly to activation layers. In [13], a dense inception neural network (DINN) was introduced to classify 11 human indoor activities. DINN modified the architectures of Inception modules and utilized skip-connections from DenseNet to address the gradient vanishing issues typical of CNNs with consecutive convolution and activation layers. Experimental results on a simulated dataset demonstrated that this custom-design-based approach achieved the best balance between classification accuracy and learnable parameters compared to the ML-based methods. However, DINN remains sensitive to noise.
Therefore, this study proposes a novel customized CNN, the stacked-residual convolutional neural network (SRCNN), aimed at achieving high accuracy, low noise sensitivity, and consistent computation times for identifying human activities based on m-D signatures. The main contributions of this work can be succinctly summarized as follows.
First, the architecture of SRCNN is built from multiple advanced feature-extracting blocks known as stacked-residual blocks (S-R blocks), which incorporate novel designs inspired by the im-res block, addition operations, and skip connections. These blocks utilize two parallel convolutional layers of different sizes (3 × 3 and 5 × 5), enabling the extraction of diverse information from distinct spatial regions of the same input feature map. This approach can improve the classification accuracy of the network compared to conventional fixed-size filters. Moreover, the use of skip connections facilitates efficient reuse of earlier features that might otherwise be attenuated through the preceding filtering stages. This strategy enhances the model’s accuracy while retaining a reasonable level of complexity and computational cost.
Second, fine-tuning the hyperparameters on two separate datasets (a noise-added dataset and a real dataset) demonstrated the efficiency of S-R blocks for multi-class action classification. The proposed model achieved the best classification results of 95% at a −5 dB signal-to-noise ratio (SNR), 99% at 10 dB SNR, and approximately 97% on the real dataset. SRCNN also outperformed other available CNNs in terms of accuracy and time consumption.
The rest of this paper is structured as follows: Section II describes the FMCW radar operation and the dataset collection. The proposed SRCNN structure is presented in Section III. Experimental and comparative results are discussed in Section IV. The final section provides the conclusion.

II. FMCW Radar and Dataset Description

1. Principles of the FMCW Radar

The diagram of an FMCW radar system is illustrated in Fig. 1, consisting of four main components. The waveform generator (WG) generates a control signal for a voltage-controlled oscillator (VCO) to produce an FMCW radar signal whose frequency varies over time.
The signal is then divided into two paths: the first path goes to the transmit antenna to be radiated into free space, where targets may exist, while the other path is fed to the mixer at the receiver. The transmitted signal of the FMCW radar can be represented as follows [15]:
(1)
$$S(t) = A_t \cos\left(2\pi\left(f_0 t + \frac{\Delta B}{2\tau} t^2\right)\right),$$
where $\Delta B$ is the sweep bandwidth, $\tau$ is the duration of a chirp, $A_t$ is the amplitude, and $f_0$ is the carrier frequency of the transmitted signal.
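For illustration, the transmitted chirp of Eq. (1) can be synthesized numerically. The following is a minimal sketch only: the carrier and bandwidth are scaled down from the dataset's 24 GHz/400 MHz so the arrays stay small, and the sampling rate is chosen purely for this toy example.

```python
import numpy as np

# Toy-scaled sketch of the FMCW chirp in Eq. (1); values are illustrative only.
f0 = 24e6        # toy carrier frequency (Hz), scaled down from 24 GHz
delta_B = 400e3  # toy sweep bandwidth (Hz), scaled down from 400 MHz
tau = 1e-3       # chirp duration (s), as in the simulated dataset
fs = 100e6       # sampling rate chosen for this illustration only
A_t = 1.0

t = np.arange(0, tau, 1 / fs)
s = A_t * np.cos(2 * np.pi * (f0 * t + (delta_B / (2 * tau)) * t ** 2))
```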
When the transmitted signal $S(t)$ hits a target, the signal is reflected and propagates back to the receiving antenna of the radar. The received signal is expressed as follows [15]:
(2)
$$R(t) = A_r \cos\left(2\pi\left(f_0 (t - t_d) + \frac{\Delta B}{2\tau} (t - t_d)^2\right)\right),$$
where $t_d = \frac{2(D_0 + vt)}{c}$ is the time delay of the received signal, $D_0$ is the distance from the target to the receiving antenna at $t = 0$, $v$ is the target’s moving velocity, and $c$ is the speed of light.
At the receiver, the received signal $R(t)$ is mixed with the in-phase and quadrature (IQ) components of a copy of the transmitted signal and passed through a low-pass filter (LPF) to obtain a baseband IQ signal for further processing [15]:
(3)
$$R_{IQ}(t) = A_m \exp\left\{j2\pi\left(\frac{2 f_0 D_0}{c} + \left(\frac{2 f_0 v}{c} + \frac{2 \Delta B D_0}{c\tau}\right) t\right)\right\},$$
where $A_m = \frac{A_t A_r}{2}$ is the amplitude of the IQ signal obtained after the LPF.
If a person is the radar target, their activities can be classified based on the m-D signatures obtained from analyzing the m-D spectrogram of the IQ signal.
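A minimal numerical sketch of the dechirped IQ signal in Eq. (3) for a single point target is given below; the range, velocity, and amplitude values are illustrative, while the radar parameters follow the simulated dataset described in Section II-3.

```python
import numpy as np

# Sketch of the baseband IQ beat signal in Eq. (3) for one point target.
c = 3e8                               # speed of light (m/s)
f0, delta_B, tau = 24e9, 400e6, 1e-3  # carrier, sweep bandwidth, chirp duration
D0, v, A_m = 3.0, 1.0, 1.0            # illustrative range (m), velocity (m/s), amplitude

n_samples = 128                       # samples per chirp, as in the simulated dataset
t = np.linspace(0, tau, n_samples, endpoint=False)
f_beat = 2 * f0 * v / c + 2 * delta_B * D0 / (c * tau)  # Doppler + range beat frequency
r_iq = A_m * np.exp(1j * 2 * np.pi * (2 * f0 * D0 / c + f_beat * t))
```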

2. Micro-Doppler Effect and Data Processing

When people move, the simultaneous movement of different body parts, such as swinging arms and legs, is defined as micromotions, in contrast to the translational motion of the body. These micromotions cause the m-D effect, which can be regarded as an additional modulation in the Doppler frequency shift of the overall movement. Consequently, the m-D signatures in the time-frequency spectrum represent micromotions and, therefore, human activities. As such, time-frequency analysis is widely used to identify and classify these activities. Fig. 2 illustrates the procedure for processing the IQ signal of an FMCW radar into a time-frequency spectrogram. Specifically, the IQ signal is transformed into a 2D data matrix, with the first dimension representing fast-time bins and the second dimension representing slow-time bins. The range-time matrix is then obtained by applying the fast Fourier transform (FFT) along the fast-time direction of the matrix. A moving target indication (MTI) filter with a specific cutoff frequency is applied to the range-time matrix to suppress returns from non-moving objects. Next, the short-time Fourier transform (STFT) is computed as a sequence of FFTs over short, overlapping windows that slide across the entire duration of the filtered range-time matrix, yielding the time-frequency spectrogram of the input IQ signal. The STFT is given as follows [16]:
(4)
$$X(\tau, \omega) = \mathrm{STFT}\{x(t)\} = \int_{-\infty}^{+\infty} x(t)\, w(t - \tau) \exp(-j\omega t)\, dt,$$
where $x(t)$ is the input signal and $w(t - \tau)$ is the window function. The squared magnitude of the complex STFT is the spectrogram, which contains the m-D signatures and can be used to classify human activities.
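The processing chain of Fig. 2 can be sketched as follows. This is our reading of the pipeline, not the authors' code; the MTI cutoff, window length, overlap, and range-bin combination are placeholder assumptions.

```python
import numpy as np
from scipy import signal

def iq_to_spectrogram(iq_matrix, prf, mti_cutoff=0.01, nperseg=128, noverlap=120):
    """Sketch of the Fig. 2 pipeline: range FFT, MTI filtering, then STFT.
    iq_matrix: 2D array (fast-time samples x slow-time chirps); prf: chirp rate (Hz).
    Cutoff, window, and overlap values are assumptions, not the paper's settings."""
    range_time = np.fft.fft(iq_matrix, axis=0)               # range FFT along fast time
    b, a = signal.butter(4, mti_cutoff, btype='highpass')    # simple MTI high-pass filter
    range_time = signal.lfilter(b, a, range_time, axis=1)    # suppress static returns
    slow_time = range_time.sum(axis=0)                       # combine range bins
    f, t, Zxx = signal.stft(slow_time, fs=prf, nperseg=nperseg,
                            noverlap=noverlap, return_onesided=False)
    return np.fft.fftshift(np.abs(Zxx) ** 2, axes=0)         # m-D spectrogram
```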

3. Dataset Description

In this study, two datasets are used to evaluate the performance of the proposed SRCNN in classifying human activities based on m-D signatures. The first dataset is a noise-added dataset collected using the SimHumalator software [17]. The second is a real dataset published by the University of Glasgow [18].

3.1 Simulated dataset

The noise-added dataset was recorded using an FMCW radar with a carrier frequency of 24 GHz (K-band). The main radar parameters are a chirp bandwidth of 400 MHz, a chirp duration of 1 ms, and 128 samples per chirp. The radar is positioned one meter above the ground, and the distances from the subjects to the radar are 3 and 5 meters. Eleven different actions are performed and repeated 60 times at various aspect angles (−90°, −45°, 0°, 45°, 90°). The duration of each execution ranges from 8 to 15 seconds, depending on the type of action. White Gaussian noise is added to the dataset at SNR values ranging from −15 to 10 dB to simulate real conditions and make the classification task more challenging.
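The paper does not publish its noise-injection code; a straightforward way to add complex white Gaussian noise at a target SNR is sketched below.

```python
import numpy as np

def add_awgn(iq, snr_db):
    """Add complex white Gaussian noise to an IQ signal at the given SNR (dB)."""
    signal_power = np.mean(np.abs(iq) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (np.random.randn(*iq.shape)
                                        + 1j * np.random.randn(*iq.shape))
    return iq + noise

# Example: generate the six noise levels used in the dataset
# noisy_versions = [add_awgn(r_iq, snr) for snr in (-15, -10, -5, 0, 5, 10)]
```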

3.2 Real dataset

The real dataset was obtained using an FMCW radar sensor with a carrier frequency of 5.8 GHz (C-band), while the remaining parameters were configured in the same way as in the previous dataset. It contains a total of 1,632 measurements divided into six groups of indoor actions: walking, drinking, falling, sitting, standing, and picking.

3.3 Data preprocessing

The measurement data from both datasets are in the form of IQ signals and are preprocessed according to the flow chart in Fig. 2. As a result, the noise-added dataset includes 39,600 spectrogram images (11 actions × 2 distances × 5 aspect angles × 60 iterations × 6 noise levels), while 1,632 spectrograms are obtained for the real dataset. Fig. 3 presents the spectrogram images of “walking” and “walk to fall” in the noise-added dataset. At an SNR of −15 dB, the features are almost completely submerged in the noise background, making them difficult to observe and distinguish. At higher SNR levels, the spectral characteristics of the activities become clearer. On closer inspection, the m-D frequency fluctuation of the “walking” activity is only about −200 Hz to 200 Hz (Fig. 3(a)–3(f)), while for the “walk to fall” activity, this fluctuation is in the range of −400 Hz to 400 Hz (Fig. 3(g)–3(l)). This can be explained by the fact that, when falling, human limbs instinctively swing more vigorously and at a much higher speed than the torso moves, leading to a sharp increase in m-D frequency fluctuations. Fig. 4 shows the spectrograms of the different activities in the real dataset.

III. Proposed SRCNN-based Activity Classification

In this section, we propose a novel SRCNN to classify human activities based on the m-D signatures of an FMCW radar. The overall structure of the proposed model (as presented in Fig. 5(a)) consists of three types of blocks: an input block, six S-R blocks, and an output block.
The SRCNN starts with the input block, which consists of an input layer and a conv-unit arranged sequentially. Specifically, the size of the input layer is 306 × 306 × 1, matching the size of the input spectrogram images. The conv-unit consists of a convolutional (conv) layer, a batch normalization (norm) layer, and an activation function (rectified linear unit [ReLU]). In detail, the convolutional layer uses 64 filters to create 64 corresponding feature maps. The convolution operation is given as follows:
(5)
$$y(i) = \mathrm{conv}(x, c) = \sum_{i} x(i)\, c(i) + b,$$
where x is the input, c is the convolution coefficient, and b is the bias. The norm layer follows each conv layer to speed up the network learning process, represented by the following equation:
(6)
$$\hat{y}(i) = \frac{y(i) - \mu}{\sqrt{\sigma^2 + \epsilon}},$$
where $y(i)$ is the output value of the conv layer, normalized by the mean $\mu$ and variance $\sigma^2$ computed over each mini-batch and each input channel, with $\epsilon = 10^{-5}$. The ReLU function is used to reduce the computational cost owing to its fast convergence.
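The conv-unit can be expressed compactly in PyTorch. The sketch below (and the ones that follow in this section) is our reading of the description, not the authors' code; "same" padding and the 3 × 3 kernel of the input block are assumptions made so that the later addition operations are shape-compatible.

```python
import torch
import torch.nn as nn

class ConvUnit(nn.Module):
    """Conv -> batch norm -> ReLU, as described for the conv-unit."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),  # 'same' padding assumed
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Input block: 306 x 306 x 1 spectrogram -> 64 feature maps (kernel size assumed)
input_block = ConvUnit(in_ch=1, out_ch=64, kernel_size=3)
```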
As shown in Fig. 5(b), the S-R block begins with an im-res block, which uses two conv-units with different kernel sizes of 3 × 3 and 5 × 5 arranged in a parallel structure (Fig. 5(c)). This design allows the im-res block to extract more diverse information from two separate spatial regions within the same input feature map. Limiting this sub-block to two parallel conv-units also maintains a good balance between classification accuracy and computational complexity: a single fixed-size filter extracts less diverse features, while more than two parallel conv-units would increase the computational cost with little gain in accuracy. The output of the im-res block is combined by an addition operation, which sums the feature maps from the two parallel conv-units and the block input. This operation can be expressed as follows:
(7)
$$Z_{add1} = C_{3\times3}(I/2) \oplus C_{5\times5}(I/2) \oplus I/2,$$
where ⊕ and $C$ denote the addition operation and the convolution operation, respectively, $Z_{add1}$ is the output of the addition layer (and thus of the im-res block), and $I/2$ is the down-sampled input to the im-res block.
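Continuing the sketch above, the im-res block of Eq. (7) can be written by summing the two parallel conv-unit branches with the block input.

```python
class ImResBlock(nn.Module):
    """Im-res block: parallel 3x3 and 5x5 conv-units summed with the input, Eq. (7)."""
    def __init__(self, channels):
        super().__init__()
        self.branch3 = ConvUnit(channels, channels, kernel_size=3)
        self.branch5 = ConvUnit(channels, channels, kernel_size=5)

    def forward(self, i_half):
        # Z_add1 = C3x3(I/2) + C5x5(I/2) + I/2
        return self.branch3(i_half) + self.branch5(i_half) + i_half
```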
The performance of human activity classification depends heavily on the m-D features present in the input spectrogram images. These features often have low energy levels, because the signals reflected from the micromotions of the limbs are weak compared to those from the whole body, so they can easily be attenuated by successive conv-units, leading to decreased classification accuracy. To address this issue, we propose using additional skip connections in the S-R blocks, inspired by the residual block of ResNet [19]. Instead of applying the traditional residual module, the stacked-residual structure of the S-R block allows the former low-energy m-D features to be relearned twice, thereby improving classification accuracy. In addition, this connection structure helps overcome the problem of vanishing gradients in the network. The output of the S-R block is described as follows:
(8)
$$I_{out} = C_{3\times3}(Z_{add2}),$$
with
(9)
$$Z_{add2} = C_{3\times3}(Z_{add1}) \oplus Z_{add1} \oplus I/2,$$
where $I/2$ is the input of the skip connection and $Z_{add2}$ is the result of the addition operation combining the skip connections with the forward propagation. As shown in Fig. 5(b), a max pooling (maxpool) layer with a pool size of 3 × 3 and a stride of (2, 2) is placed at the beginning of each S-R block to down-sample the feature maps for subsequent blocks.
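Putting Eqs. (7)-(9) together, a hedged sketch of one S-R block, reusing the ConvUnit and ImResBlock classes from the sketches above, is given below; the maxpool padding is an assumption.

```python
class SRBlock(nn.Module):
    """S-R block: maxpool, im-res block, then a 3x3 conv-unit with two skip connections."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)  # padding assumed
        self.im_res = ImResBlock(channels)
        self.conv_mid = ConvUnit(channels, channels, kernel_size=3)
        self.conv_out = ConvUnit(channels, channels, kernel_size=3)

    def forward(self, x):
        i_half = self.pool(x)                              # down-sampled input I/2
        z_add1 = self.im_res(i_half)                       # Eq. (7)
        z_add2 = self.conv_mid(z_add1) + z_add1 + i_half   # Eq. (9)
        return self.conv_out(z_add2)                       # Eq. (8), I_out
```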
Finally, an output block, which contains a maxpool layer, a conv-unit, an average pooling (avgpool) layer, a fully connected (FCN) layer, a softmax layer, and a classification (class) layer, is used to classify the action classes. The maxpool and avgpool layers are used consecutively to extract the largest and average features. The FCN layer has a number of neurons equal to the number of activities in the given dataset: 11 action classes for the noise-added dataset and six action classes for the real dataset. The last two layers are the softmax and classification layers, where the softmax function generates the class probabilities used to predict the appropriate class. The output of the softmax function can be presented as follows:
(10)
$$p_i(x) = \mathrm{softmax}\{r_i(x)\} = \frac{e^{r_i(x)}}{\sum_j e^{r_j(x)}},$$
where $r_i(x)$ is the $i$-th element of $r(x)$, the output feature vector of the FCN layer. Finally, SRCNN predicts the specific action of an incoming signal $x$ based on the class with the highest probability:
(11)
$$\text{Predicted\_action}(x) = \arg\max_i \; p_i(x).$$
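The blocks can then be assembled into the overall model, with the prediction rule of Eqs. (10)-(11). The layer order of the output block follows the description above, while the 3 × 3 kernel of its conv-unit is an assumption; this is a sketch built on the classes defined earlier in this section, not the authors' implementation.

```python
class SRCNN(nn.Module):
    """Sketch of the full SRCNN: input block, stacked S-R blocks, output block."""
    def __init__(self, num_classes=11, channels=64, num_blocks=6):
        super().__init__()
        self.input_block = ConvUnit(1, channels, kernel_size=3)
        self.sr_blocks = nn.Sequential(*[SRBlock(channels) for _ in range(num_blocks)])
        self.output_block = nn.Sequential(
            nn.MaxPool2d(3, stride=2, padding=1),
            ConvUnit(channels, channels, kernel_size=3),
            nn.AdaptiveAvgPool2d(1),              # average pooling
            nn.Flatten(),
            nn.Linear(channels, num_classes),     # FCN layer
        )

    def forward(self, x):
        return self.output_block(self.sr_blocks(self.input_block(x)))

# Prediction (Eqs. 10-11): softmax over the FCN output, then argmax
model = SRCNN(num_classes=11)
logits = model(torch.randn(1, 1, 306, 306))
predicted_action = logits.softmax(dim=1).argmax(dim=1)
```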

IV. Experimental and Comparison Results

In this section, both datasets (simulated and real) are used to assess the activity classification performance of the proposed model. Initially, the noise-added dataset is used to investigate the impact of the number of S-R blocks and the number of channels per convolutional layer on the classification accuracy of the proposed model. Subsequently, the proposed model with refined hyperparameters is evaluated in terms of classification accuracy, processing time, and complexity on both datasets and compared with other CNNs.
The training and testing processes are carried out on a computer configured with an Intel Core i5-12400F 2.5 GHz processor, 32 GB of RAM, and an RTX 3060Ti GPU. For training, a batch size of 16, an initial learning rate of 0.01, and 20 epochs are used with the stochastic gradient descent optimizer. Moreover, five-fold cross-validation is employed on both datasets to evaluate and compare the performance of the proposed model with other models.
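A minimal training loop with the reported hyperparameters might look like the following; the dataset object below is a random placeholder (the paper's data loading and fold splitting are not shown), and the model class is the Section III sketch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder stand-in for the spectrogram dataset (real loading code not given in the paper)
train_dataset = TensorDataset(torch.randn(32, 1, 306, 306), torch.randint(0, 11, (32,)))

model = SRCNN(num_classes=11)                              # from the Section III sketch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # SGD, initial learning rate 0.01
criterion = torch.nn.CrossEntropyLoss()
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)  # batch size 16

for epoch in range(20):                                    # 20 epochs
    for spectrograms, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(spectrograms), labels)
        loss.backward()
        optimizer.step()
```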

1. SRCNN Performance Evaluation

1.1 Impact of filter number on the proposed model’s performance

In this subsection, we first fix the model at 5 S-R blocks and change the number of filters in each convolutional layer to determine the most suitable number for the proposed model. Fig. 6 depicts the results for three metrics (accuracy, prediction time, and the number of learnable parameters) when the number of filters is set to 32, 48, 64, and 96. The model using 64 channels achieves a good balance between accuracy and inference time for the activity classification task. Specifically, the accuracy of the model using 64 filters reaches 95.64%, which is significantly higher than with 32 and 48 filters and only marginally lower than with 96 filters. Although the accuracy with 96 filters is 95.71%, slightly higher than with 64 filters, it comes at the expense of computation time and complexity, with a prediction time of 8.4 ms and 2.4 million (M) learnable parameters.
A similar trend occurs when the number of S-R blocks is fixed at 4 and 6 (as shown in Table 1). As a result, using 64 filters for each convolutional layer provides the best trade-off for the proposed model.

1.2 Impact of S-R block number on the proposed model’s performance

In this experiment, to assess the effect of the number of S-R blocks on the accuracy of the proposed model, we fix the number of filter channels at 64. Next, the number of S-R blocks is changed incrementally from 3 to 7. The average classification accuracy of SRCNN as a function of SNR with different numbers of S-R blocks is indicated in Fig. 7.
Fig. 7 shows that the correct recognition rate increases with higher SNR levels and more S-R blocks. However, the recognition accuracy slightly degrades when the number of S-R blocks reaches 7. The average accuracy improves significantly with a small number of blocks (by about 3.5% from 3 to 6 blocks) and slightly decreases with a large number of blocks (by around 0.5% from 6 to 7 blocks). This finding suggests that using a large number of blocks may approach the limit of the network’s learning efficiency. Although the complexity of the model is lower when using 4 or 5 S-R blocks, the model is then insufficiently deep, leading to insufficiently distinct features for classifying each specific human action. For the best trade-off between accuracy and time consumption, we propose using a model with 6 S-R blocks and 64 filter channels for comparison with other existing models.
For further evaluation, the confusion matrix of SRCNN at 10 dB is reported in Fig. 8, which shows that all activities have correct recognition rates greater than 96%, with two notable actions, “walk to fall” and “walking,” achieving a classification accuracy of 100%. This result is due to the clearer separation of the spectral features of these two actions compared to the other actions considered in this study.

1.3 Impact of input size on the proposed model’s performance

Higher-resolution images contain more information and features, which helps the classifier achieve higher accuracy. However, this improvement comes at the cost of increased computational complexity, processing time, and memory requirements. In this subsection, we explore the impact of different input sizes, specifically 612 × 612 × 1, 306 × 306 × 1, and 153 × 153 × 1, on the performance of the proposed model. The performance of the proposed model in terms of classification accuracy, training time, and prediction time for the different image sizes is summarized in Table 2. It can be observed that the model with an input size of 306 × 306 × 1 significantly improves average classification accuracy compared to the model with an input size of 153 × 153 × 1 while showing only a marginal difference compared to the model with an input size of 612 × 612 × 1.
In terms of training and prediction times, the model with an input size of 306 × 306 × 1 demonstrates reasonable processing times. Specifically, under identical training and testing conditions, the model with an input size of 306 × 306 × 1 achieves fast performance (192 minutes of training time and 6.5 ms of prediction time), which is slightly slower than the model with an input size of 153 × 153 × 1 but considerably faster than the model with an input size of 612 × 612 × 1. Therefore, we select an input size of 306 × 306 × 1 for the proposed model.

2. Performance Comparison

This section presents a performance comparison between the proposed SRCNN and seven existing CNN models, including RepVGG [20], MobileNet-V2 [21], ResNet [19], DINN [13], DIAT-RadHARNet [22], ConvNeXt-T [23], and Xception [24], focusing on their ability to classify human activities on two concrete datasets (simulation with noise and real).

2.1 Overview of existing models

Among the seven models selected for comparison, Xception and MobileNet-V2 are well-known models published in 2017 and 2018, respectively. Known for their high classification accuracy on the ImageNet dataset, these models are widely adopted in computer vision tasks via transfer learning. The next four networks, ConvNeXt-T, RepVGG, DINN, and DIAT-RadHARNet, are state-of-the-art models published between 2020 and 2022. ConvNeXt-T, a compact variant of the ConvNeXt model announced in 2022, achieved significant classification results on the ImageNet dataset. RepVGG is an improved version of the VGG network that applies structural re-parameterization to improve classification accuracy while significantly reducing processing time. DINN is a customized CNN for human activity recognition inspired by dense connections, which allow features that would otherwise be lost during forward propagation to be reused during training. DIAT-RadHARNet is a lightweight DCNN model designed to classify six suspicious human activities. Finally, ResNet, the winner of the ImageNet and COCO 2015 challenges, is a renowned model regarded as the first to use the traditional residual block.

2.2 Comparison results on the noise-added dataset

Table 3 presents the comparison results of the proposed model with the other CNN models. DINN emerges as the fastest network, with an average prediction time of just 5.9 ms; however, its average accuracy is the lowest, at approximately 62%. This result stems from DINN’s simple structure (with only 1.5 M learnable parameters), which leads to less efficient feature extraction from the actions and potential confusion with features extracted from the added noise. Despite having the highest number of learnable parameters (≈27.8 M), ConvNeXt-T achieves a classification accuracy of only 75% and has the second-longest processing time (approximately 12 ms), slightly faster than DIAT-RadHARNet (15 ms). ConvNeXt-T adopts a structure similar to ResNet-50 but with specific modifications aimed at maximizing classification performance on the ImageNet dataset, which may not be optimal for new spectral image datasets featuring m-D signatures at varying noise levels.
With diverse feature extraction capabilities and effective reuse of features from previous layers, our proposed model achieves the highest classification accuracy (over 96%) with the fewest learnable parameters (around 1.2 M), compared to DIAT-RadHARNet, MobileNet-V2, Xception, and ResNet, which have 2 M, 2.2 M, 20.8 M, and 23.5 M learnable parameters, respectively. Furthermore, our model boasts an execution time of 6.5 ms, comparable to DINN and significantly faster than MobileNet-V2 (7.3 ms), ResNet (7.6 ms), Xception (9.8 ms), and DIAT-RadHARNet (15 ms).
The comparison of average accuracy across various noise levels is shown in Fig. 9(a). The results indicate that DINN achieves the lowest accuracy, with only 40% at −15 dB and 83% at 10 dB. Notably, Xception exhibits the highest recognition accuracy at −15 dB, owing to its very deep structure (170 layers) and correspondingly long processing time (9.8 ms). However, as the SNR increases within the range of −10 to 10 dB, the proposed model consistently outperforms the others in terms of accuracy, attributed to its optimized two-layer S-R block design. Remarkably, at −5 dB, the proposed model achieves a recognition accuracy of more than 95%, significantly surpassing all the others.
In addition to average accuracy, the proposed model is also evaluated using three additional important metrics: precision, recall, and F1-score. Precision is the ratio of true positives (TP) to all samples classified as positive (TP + FP). Recall is the ratio of true positives (TP) to all actual positives (TP + FN). The F1-score is the harmonic mean of precision and recall. These metrics are calculated as follows:
(12)
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},$$
(13)
$$\text{Precision} = \frac{TP}{TP + FP},$$
(14)
$$\text{Recall} = \frac{TP}{TP + FN},$$
(15)
$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}},$$
where TP, FP, TN, and FN are short for true positive, false positive, true negative, and false negative, respectively. The three metrics are presented in Fig. 9(b), 9(c), and 9(d), where the proposed model attains the highest accuracy and stability across varying noise levels.
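Eqs. (12)-(15) can be computed per class and then averaged; the macro-averaging below is an assumption, since the paper does not state how the per-class values are aggregated across the 11 (or six) classes.

```python
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """Accuracy plus macro-averaged precision, recall, and F1-score, Eqs. (12)-(15)."""
    precisions, recalls = [], []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if (tp + fp) > 0 else 0.0)
        recalls.append(tp / (tp + fn) if (tp + fn) > 0 else 0.0)
    precision, recall = float(np.mean(precisions)), float(np.mean(recalls))
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    accuracy = float(np.mean(y_true == y_pred))
    return accuracy, precision, recall, f1
```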
To evaluate the accuracy improvement of the proposed SRCNN model, which incorporates the enhanced S-R blocks, relative to the other models, we extracted the 11-dimensional output features of the FCN layer for 120 samples of four activities and plotted them on a 2D plane. The selected activities were grab, sit, walk to fall, and walk to sit. The distribution of these features is shown in Fig. 10. Intuitively, the clustering of the extracted features from each model reflects its corresponding classification accuracy. Models whose features are widely dispersed or overlapping are more challenging to classify accurately, resulting in decreased accuracy. Conversely, activities with clearly clustered and separated features are easier to classify. Specifically, the features extracted by DINN for the actions “walk to fall” and “walk to sit” exhibit a significant overlap of approximately 50% (violet and green points in Fig. 10(a)). This overlap complicates the accurate classification of these actions, resulting in a high false prediction rate and the lowest average accuracy, observed for the DINN model.
Furthermore, we assessed the distribution of extracted features from SRCNN and ResNet. Fig. 10(e) shows the extracted features from ResNet, where features of grab, sit, and walk to fall are relatively scattered and overlap with each other, leading to confusion during classification. By contrast, Fig. 10(h) shows features extracted by the proposed SRCNN model, which are clearly clustered and separated without overlap. This distinction explains why the proposed model achieves significantly improved accuracy compared to models using traditional residual blocks.
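The paper does not state how the 11-dimensional FCN outputs were projected onto a 2D plane; one plausible way to produce such a plot is a PCA projection, sketched below with synthetic placeholder features and hypothetical variable names.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Placeholder stand-ins for the 11-dimensional FCN outputs of 120 samples (4 activities x 30)
features = np.random.randn(120, 11)
labels = np.repeat(np.arange(4), 30)
activity_names = ["grab", "sit", "walk to fall", "walk to sit"]

coords = PCA(n_components=2).fit_transform(features)   # project features to 2D
for idx, name in enumerate(activity_names):
    mask = labels == idx
    plt.scatter(coords[mask, 0], coords[mask, 1], label=name)
plt.legend()
plt.show()
```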

2.3 Comparison results on the real dataset

To enhance confidence in our findings, we compared the proposed network with the others on the real dataset. Table 4 presents the numerical results, highlighting SRCNN’s superior recognition accuracy. Specifically, the proposed network achieves an accuracy of 96.63%, a precision of 96.87%, a recall of 96.79%, and an F1-score of 96.83%, thereby outperforming the other models considered. Remarkably, the proposed network, featuring a two-layer residual connection structure, improves classification accuracy by 4.28% and reduces the average execution time by 1.1 ms compared to ResNet, which uses traditional residual connections.
Moreover, Table 4 includes classification results for specific activities on the real dataset, demonstrating the proposed model’s strong performance, with all six activities achieving over 91% accuracy.

V. Conclusion

In this study, we proposed the SRCNN model with six S-R blocks and 64 filters in each convolutional layer for daily human activity detection and identification based on m-D signatures. The proposed model efficiently extracts and reuses features through its two-layer stacked-residual structure. SRCNN exhibits outstanding recognition performance compared to seven other state-of-the-art networks across various SNRs. In addition, our model achieves the highest accuracy, precision, recall, and F1-score on both the simulated and real datasets. Future work will focus on optimizing and verifying the proposed model through experimental measurements. Subsequently, the optimized model will be implemented in a real system for human activity classification applications.

Fig. 1
Principle of the FMCW radar.
jees-2024-4-r-235f1.jpg
Fig. 2
Data processing.
jees-2024-4-r-235f2.jpg
Fig. 3
The spectrogram of (a-f) walking and (g-l) walk to fall in the noise-added dataset at −15 dB, −10 dB, −5 dB, 0 dB, 5 dB, and 10 dB, respectively.
jees-2024-4-r-235f3.jpg
Fig. 4
The spectrograms of the different activities in the real dataset: (a) walking, (b) picking, (c) drinking, (d) falling, (e) sitting, and (f) standing.
jees-2024-4-r-235f4.jpg
Fig. 5
Overall structure of the proposed SRCNN. (a) SRCNN, (b) the S-R block, and (c) the im-res block.
jees-2024-4-r-235f5.jpg
Fig. 6
Performance of SRCNN with different numbers of filter channels.
jees-2024-4-r-235f6.jpg
Fig. 7
The average classification accuracy of SRCNN as a function of SNR for different numbers of S-R blocks.
jees-2024-4-r-235f7.jpg
Fig. 8
The confusion matrix of SRCNN at 10 dB.
jees-2024-4-r-235f8.jpg
Fig. 9
Performance metrics of the SRCNN with other existing models at different SNRs: (a) accuracy, (b) precision, (c) recall, and (d) F1-score.
jees-2024-4-r-235f9.jpg
Fig. 10
Clustering representation of extracted features: (a) DINN, (b) RepVGG, (c) DIAT-RadHARNet, (d) ConvNeXt-T, (e) ResNet, (f) MobileNet- V2, (g) Xception, and (h) SRCNN.
jees-2024-4-r-235f10.jpg
Table 1
Average classification accuracy (%) for different numbers of S-R blocks and filters

Number of S-R blocks   32 filters   48 filters   64 filters   96 filters
4                      93.87        94.47        94.54        94.58
5                      94.64        95.02        95.64        95.71
6                      94.83        95.27        96.31        96.35
Table 2
Comparison results of SRCNN with different input image sizes
Image size Accuracy (%) Training time (min) Prediction time (ms)
612 × 612 × 1 96.76 614 16.4
306 × 306 × 1 96.31 192 6.5
153 × 153 × 1 85.83 139 4.2
Table 3
Comparison results of SRCNN with state-of-the-art CNN
Model Accuracy (%) Time (ms) Parameters (million)
DINN 62.41 5.9 1.5
RepVGG 90.84 7.9 24.6
MobileNet-V2 94.87 7.3 2.2
ResNet 94.99 7.6 23.5
Xception 95.48 9.8 20.8
DIAT-RadHARNet 83.31 15 2
ConvNeXt-T 75.26 12 27.8
SRCNN 96.31 6.5 1.2
Table 4
Comparison results of SRCNN with different state-of-the-art CNN models on the real dataset
Model   Average accuracy (%)   Drinking   Falling   Picking   Sitting   Standing   Walking   Precision (%)   Recall (%)   F1-score (%)
(The per-activity columns give classification accuracy in %.)
RepVGG 90.82 75.4 97.5 80.7 100 96.5 96.6 91.33 91.11 91.22
MobileNet-V2 93.58 86 97.5 84.2 94.8 100 100 93.98 93.75 93.87
ResNet 92.35 82.5 97.5 82.5 96.6 100 96.6 92.76 92.58 92.68
DINN 88.69 73.7 95 84.2 93.1 89.5 98.3 89.88 88.96 89.42
Xception 93.58 84.2 97.5 87.7 98.3 94.7 100 93.94 93.74 93.84
DIAT-RadHARNet 91.13 82.5 97.5 73.7 98.3 96.5 100 91.17 91.4 91.29
ConvNeXt-T 88.99 82.5 97.5 70.2 98.3 89.5 98.3 89.86 89.36 89.61
SRCNN 96.63 93 100 91.2 98.3 98.2 100 96.87 96.79 96.83

References

1. W. Li, T. H. M. Keegan, B. Sternfeld, S. Sidney, C. P. Quesenberry, and J. L. Kelsey, "Outdoor falls among middle-aged and older adults: a neglected public health problem," American Journal of Public Health, vol. 96, no. 7, pp. 1192–1200, 2006. https://doi.org/10.2105/AJPH.2005.083055
2. R. Igual, C. Medrano, and I. Plaza, "Challenges, issues and trends in fall detection systems," BioMedical Engineering OnLine, vol. 12, article no. 66, 2013. https://doi.org/10.1186/1475-925X-12-66
3. V. C. Chen, F. Li, S. S. Ho, and H. Wechsler, "Micro-Doppler effect in radar: phenomenon, model, and simulation study," IEEE Transactions on Aerospace and Electronic Systems, vol. 42, no. 1, pp. 2–21, 2006. https://doi.org/10.1109/TAES.2006.1603402
4. A. K. Seifert, M. G. Amin, and A. M. Zoubir, "Toward unobtrusive in-home gait analysis based on radar micro- Doppler signatures," IEEE Transactions on Biomedical Engineering, vol. 66, no. 9, pp. 2629–2640, 2019. https://doi.org/10.1109/TBME.2019.2893528
5. Y. Kim and H. Ling, "Human activity classification based on micro-Doppler signatures using a support vector machine," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 5, pp. 1328–1337, 2009. https://doi.org/10.1109/TGRS.2009.2012849
6. S. A. Shah and F. Fioranelli, "Human activity recognition: preliminary results for dataset portability using FMCW radar," In: Proceedings of the 2019 International Radar Conference; Toulon, France. 2019, pp 1–4. https://doi.org/10.1109/RADAR41533.2019.171307
7. F. A. Jibrin, A. Abdulaziz, A. S. Muhammad, A. D. Usman, and Y. Jibril, "Indoor human activity classification based on FMCW radar micro-Doppler signatures and deep-learning networks," In: Proceedings of 2021 1st International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS); Abuja, Nigeria. 2021, pp 1–5. https://doi.org/10.1109/ICMEAS52683.2021.9692418
8. T. Huynh-The, Q. V. Pham, T. V. Nguyen, T. T. Nguyen, D. B. da Costa, and D. S. Kim, "RanNet: learning residual-attention structure in CNNs for automatic modulation classification," IEEE Wireless Communications Letters, vol. 11, no. 6, pp. 1243–1247, 2022. https://doi.org/10.1109/LWC.2022.3162422
9. V. S. Doan, T. Huynh-The, and D. S. Kim, "Underwater acoustic target classification based on dense convolutional neural network," IEEE Geoscience and Remote Sensing Letters, vol. 19, article no. 1500905, 2020. https://doi.org/10.1109/LGRS.2020.3029584
10. I. Roldan, C. R. del-Blanco, A. Duque de Quevedo, F. Ibañez Urzaiz, J. Gismero Menoyo, and A. Asensio Lopez, "DopplerNet: a convolutional neural network for recognising targets in real scenarios using a persistent range–Doppler radar," IET Radar, Sonar & Navigation, vol. 14, no. 4, pp. 593–600, 2020. https://doi.org/10.1049/iet-rsn.2019.0307
11. H. Le, V. S. Doan, H. H. Nguyen, T. Huynh-The, K. Le-Ha, and V. P. Hoang, "Micro-Doppler-radar-based UAV detection using inception-residual neural network," In: Proceedings of 2020 International Conference on Advanced Technologies for Communications (ATC); Nha Trang, Vietnam. 2020, pp 177–181. https://doi.org/10.1109/ATC50776.2020.9255454
12. Y. Lan, Y. Hao, K. Xia, B. Qian, and C. Li, "Stacked residual recurrent neural networks with cross-layer attention for text classification," IEEE Access, vol. 8, pp. 70401–70410, 2020. https://doi.org/10.1109/ACCESS.2020.2987101
13. N. Nguyen, M. Pham, V. Le, D. DuongQuoc, and V. S. Doan, "Micro-Doppler signatures based human activity classification using dense-inception neural network," In: Proceedings of 2022 International Conference on Advanced Technologies for Communications (ATC); Ha Noi, Vietnam. 2022, pp 268–273. https://doi.org/10.1109/ATC55345.2022.9943046
14. Y. Kim and T. Moon, "Human detection and activity classification based on micro-Doppler signatures using deep convolutional neural networks," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 1, pp. 8–12, 2016. https://doi.org/10.1109/LGRS.2015.2491329
15. V. Winkler, "Range Doppler detection for automotive FMCW radars," In: Proceedings of 2007 European Microwave Conference; Munich, Germany. 2007, pp 166–169. https://doi.org/10.1109/EURAD.2007.4404963
16. J. Allen, "Short-term spectral analysis, synthesis, and modification by discrete Fourier transform," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 25, no. 3, pp. 235–238, 1977. https://doi.org/10.1109/TASSP.1977.1162950
17. S. Vishwakarma, W. Li, C. Tang, K. Woodbridge, R. Adve, and K. Chetty, "SimHumalator: an open-source end-to-end radar simulator for human activity recognition," IEEE Aerospace and Electronic Systems Magazine, vol. 37, no. 3, pp. 6–22, 2022. https://doi.org/10.1109/MAES.2021.3138948
18. F. Fioranelli, S. A. Shah, H. Li, A. Shrestha, S. Yang, and J. Le Kernec, "Radar sensing for healthcare: Associate Editor Francesco Fioranelli on the applications of radar in monitoring vital signs and recognising human activity patterns," Electronics Letters, vol. 55, no. 19, pp. 1022–1024, 2019. https://doi.org/10.1049/el.2019.2378
19. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA. 2016, pp 770–778. https://doi.org/10.1109/CVPR.2016.90
20. X. Ding, X. Zhang, N. Ma, J. Han, G. Ding, and J. Sun, "RepVGG: making VGG-style ConvNets great again," In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; Nashville, TN, USA. 2021, pp 13733–13742. https://doi.org/10.1109/CVPR46437.2021.01352
21. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, "MobileNetV2: inverted residuals and linear bottle-necks," In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA. 2018, pp 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
22. M. Chakraborty, H. C. Kumawat, S. V. Dhavale, and A. Arockia Bazil Raj, "DIAT-RadHARNet: a lightweight DCNN for radar based classification of human suspicious activities," IEEE Transactions on Instrumentation and Measurement, vol. 71, article no. 2505210, 2022. https://doi.org/10.1109/TIM.2022.3154832
23. Z. Liu, H. Mao, C. Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A ConvNet for the 2020s," In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; New Orleans, LA, USA. 2022, pp 11976–11986. https://doi.org/10.1109/CVPR52688.2022.01167
24. F. Chollet, "Xception: deep learning with depthwise separable convolutions," In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA. 2017, pp 1251–1258. https://doi.org/10.1109/CVPR.2017.195

Biography

jees-2024-4-r-235i1.jpg
NgocBinh Nguyen, https://orcid.org/0000-0002-7504-0520 received his B.Sc. and M.Sc. degrees in Electronics and Telecommunications from Telecommunication University, Khanh Hoa, Vietnam and Le Quy Don Technical University, Hanoi, Vietnam, respectively. He is now a PhD candidate at Le Quy Don Technical University, Hanoi, Vietnam. His current research interests include radar signal processing, imaging processing, and deep learning.

Biography

jees-2024-4-r-235i2.jpg
Van-Sang Doan, https://orcid.org/0000-0001-9048-4341 received the M.Sc. and Ph.D. degrees in Electronic Systems and Services from the Faculty of Military Technology, University of Defence in Brno, Czech Republic, in 2013 and 2016, respectively. He was awarded the Honors degree three times by the Faculty of Military Technology at the University of Defence in Brno, in 2011, 2013, and 2016. He served as a Postdoctoral Research Fellow at the ICT Convergence Research Center at Kumoh National Institute of Technology, Republic of Korea, from 2019 to 2020. He is currently working at the Faculty of Communication and Radar, Vietnam Naval Academy, in Nha Trang City, Khanh Hoa Province, Vietnam. His research interests include radar, sonar, and communication systems, signal processing, and deep learning.

Biography

jees-2024-4-r-235i3.jpg
MinhNghia Pham, https://orcid.org/0000-0002-0732-0213 was born in 1980. He received his B.Sc. and M.Sc. degrees in Electronics Engineering from the Le Quy Don Technical University, Vietnam, in 2005 and 2008, respectively, and the Ph.D. degree in Information and Communications Engineering from Harbin Institute of Technology, China in 2014. He is currently a lecturer at Le Quy Don Technical University, Vietnam. He currently focuses on polarimetric synthetic aperture radar image processing, radar signal processing, deep learning and signal processing.

Biography

jees-2024-4-r-235i4.jpg
VanNhu Le, https://orcid.org/0000-0001-7023-4265 received his M.S. and Ph.D. degrees from the Harbin Institute of Technology in 2012 and 2016, respectively. Afterward, he was a postdoctoral researcher in the College of Optical Science and Engineering at Zhejiang University in Hangzhou City, Zhejiang, China, from 2016 to 2018. He is currently a lecturer at Le Quy Don Technical University. His research fields are fluorescence super-resolution microscopy, wavefront coding systems, image processing, optical design, radar signal processing, and deep learning.

Copyright © 2024 by The Korean Institute of Electromagnetic Engineering and Science.
