

J. Electromagn. Eng. Sci > Volume 22(4); 2022 > Article
Seo, Choi, Lim, and Park: A Deep Learning-Based Compact Weighted Binary Classification Technique to Discriminate between Targets and Clutter in SAR Images


The proposed approach is a deep learning-based compact weighted binary classification (DL-CWBC) method to discriminate between targets and clutter in synthetic aperture radar (SAR) images. A modified cross-entropy error function is proposed to improve the probability of detection while controlling the rate of false alarms (FAs). The unique feature of the DL-CWBC algorithm is that it reduces the FA rate and maximizes the probability of target detection without missing any target. For pre-processing, targets and clutter are detected with a constant false alarm rate (CFAR) algorithm, a conventional detection method, and are then manually divided into two classes. The classified targets and clutter are trained with a ResNet-101 network. There is a trade-off between minimizing the FA rate and maximizing the detection probability for targets of interest (TOIs). The weighting coefficient of the modified cross-entropy error function optimizes this trade-off, and an extreme distinction decision ensures that no target is missed. Above all, the DL-CWBC algorithm performs very well despite its simplicity.

I. Introduction

Synthetic aperture radar (SAR) [1] is a very useful sensing technique for detecting objects of interest in any weather conditions. Unlike optical sensors, SAR can obtain images of a desired area regardless of day, night, or weather. It has mainly been used for surveillance and reconnaissance in military applications. Until recently, the large volume of acquired data made it impractical to manually detect and recognize targets of interest (TOIs). To overcome this difficulty, automatic target detection (ATD) technology [2] has been actively developed. Automatic detection of TOIs is therefore a very important process in SAR.
There have been many approaches [3–9] to detecting TOIs. The most popular are the constant false alarm rate (CFAR)-based algorithms, which use sliding windows. All CFAR-based approaches focus on the brightness and contrast of objects. CFAR with extended fractal (EF) features introduced an additional feature, the size of objects [10]. However, there are two critical problems with CFAR-based algorithms. First, they cannot distinguish between targets and clutter. Second, too many false alarms (FAs) are generated by bright clutter. Therefore, CFAR-based algorithms require a discrimination procedure to reduce clutter, and several discrimination techniques have been proposed to remove it [11–13]. The MIT Lincoln Laboratory introduced a prescreening stage to eliminate natural clutter and pass man-made objects [12], using fifteen features in the discrimination algorithm. A rank-based feature selection scheme has been proposed to obtain high discrimination performance [13]. Recently, research has progressed toward improving conventional CFAR [14–16]. In ATD, the trade-off between the FA rate and the target detection probability is not an issue that is easily solved.
The advent of deep learning technology has made it easy to detect targets automatically [17–20]. SAR target recognition based on convolutional neural networks (CNN) [17] has been applied to the public MSTAR dataset. CNN-based ATD has been researched for ship detection [18], and another ship detection study was conducted with YOLOv2 (you only look once, version 2) [19]. The detection of ocean internal waves has been analyzed using faster regions with CNN (Faster R-CNN) [20]. However, these methods are limited in their use of different deep learning networks and in their complicated target detection procedures.
A deep learning-based compact weighted binary classification (DL-CWBC) is proposed to separate and classify targets and clutter in SAR images. The proposed algorithm uses the ResNet-101 network of Microsoft Research [21]. For pre-processing, the conventional CFAR detects targets and clutter. The obtained clutter can be used directly as one class of a binary classifier. Targets, however, must be obtained from the ground truth because conventional CFAR cannot detect them perfectly. Targets and clutter are then trained as a binary classifier with ResNet-101. One of the biggest advantages of using a deep learning network for target detection is that no complicated clutter removal techniques are needed: ResNet-101 automatically removes the clutter that generates many FAs. With sufficient training data, a well-trained ResNet-101 outperforms various CFAR-based algorithms. The proposed algorithm has two key ingredients: a newly defined cross-entropy error function, which controls the trade-off between the FA rate and target detection, and an extreme distinction decision. The goal of the proposed approach is to miss no targets while achieving an optimal FA rate. The proposed approach has a detection rate similar to that of conventional CFAR algorithms, and its main contribution is a dramatic improvement in the clutter removal rate compared with conventional CFAR with a discrimination algorithm. In addition, the DL-CWBC does not miss any targets from the ground truth.
The paper is organized as follows: a traditional CFAR-based algorithm is introduced in Section II, the ResNet-101 deep learning network is described in Section III, Section IV describes the DL-CWBC technique for discriminating targets and clutter in SAR images, Section V demonstrates the performance of the proposed technique, and the paper is concluded in Section VI.

II. A Conventional CFAR-Based Detection Algorithm

Traditional target detection algorithms for SAR images are usually based on CFAR. However, a discrimination procedure is needed to obtain the desired results, so the overall process has two stages: a CFAR-based algorithm that detects targets and clutter, and a discrimination technique that reduces clutter. First, a conventional CFAR algorithm detects all targets and clutter, including FAs. A conventional CFAR algorithm is shown in Fig. 1. In a SAR image, a rectangular sliding window inspects each pixel and detects pixels with bright intensity. In the central figure, the bright pixels are marked with a zoomed square box. The detailed structure of the CFAR window is shown in the figure on the right. The value of the detection feature in the CFAR is computed by:
d[m, n] = (I[m, n] − μ̂c[m, n]) / σ̂c[m, n],
where I[m,n] is the bright intensity of pixel [m,n], and μ̂c[m,n] and σ̂c[m,n] are the mean and standard deviation of the clutter window surrounding pixel [m,n].
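As a sketch, this detection statistic can be implemented with a brute-force sliding window in pure NumPy. The window size, the threshold, and the omission of a guard band between the test pixel and the clutter window are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cfar_detect(img, win=9, threshold=3.0):
    """Two-parameter CFAR sketch: a pixel is declared bright when its
    intensity exceeds the local clutter mean by `threshold` standard
    deviations. `win` is the side length of the clutter window; the
    guard band around the test pixel is omitted for brevity."""
    h = win // 2
    det = np.zeros(img.shape, dtype=bool)
    for m in range(h, img.shape[0] - h):
        for n in range(h, img.shape[1] - h):
            clutter = img[m - h:m + h + 1, n - h:n + h + 1]
            mu, sigma = clutter.mean(), clutter.std() + 1e-12
            det[m, n] = (img[m, n] - mu) / sigma > threshold
    return det
```

A practical implementation would vectorize the local statistics (e.g., with box filters) rather than loop over pixels.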
There are three main steps to clustering bright pixels and detecting TOIs. The procedure for clustering bright pixels is shown in Fig. 2. First, the detected bright pixels are labeled. Second, a morphological operation enables close pixels to merge into one region. Finally, several regions formed in this way are clustered. This is the simple clustering and detecting procedure of TOIs in the conventional CFAR algorithm.
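The three clustering steps can be sketched as follows. The dilation-based merge, the 4-connectivity rule, and the bounding-box output are simplifying assumptions standing in for the morphological operation described above:

```python
import numpy as np

def cluster_bright_pixels(det, close_iters=1):
    """Sketch of the three steps in Fig. 2: (1) merge nearby bright
    pixels with a morphological dilation, (2) label 4-connected
    regions with a flood fill, (3) return one bounding box
    (row_min, col_min, row_max, col_max) per clustered region."""
    mask = det.copy()
    for _ in range(close_iters):                      # step 1: merge close pixels
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]; grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]; grown[:, :-1] |= mask[:, 1:]
        mask = grown
    labels = np.zeros(mask.shape, dtype=int)          # step 2: label regions
    next_label = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        next_label += 1
        stack = [seed]
        while stack:
            i, j = stack.pop()
            if not mask[i, j] or labels[i, j]:
                continue
            labels[i, j] = next_label
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]:
                    stack.append((ni, nj))
    boxes = []                                        # step 3: one box per cluster
    for k in range(1, next_label + 1):
        ii, jj = np.nonzero(labels == k)
        boxes.append((ii.min(), jj.min(), ii.max(), jj.max()))
    return boxes
```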
In practice, CFAR-based algorithms may include many other complicated procedures to detect targets, and additional procedures are needed to remove clutter. A discrimination algorithm may need to consider many features to eliminate clutter, such as the standard deviation, fractal dimension, weighted fill rank ratio, etc. A discriminator using these features decides whether a detection is a target or clutter. The details of this implementation are beyond the scope of this paper.

III. ResNet-101 Deep Learning Network

In this section, the deep learning network is presented. Fig. 3 shows the detailed layers of the ResNet-101 network. The numbers in each rectangular box indicate the filter dimensions and the number of filters. In the first block, 7 × 7 is the height and width of the filter, 64 is the number of filters, and the stride is 2. The second block is a 3 × 3 max-pooling layer with a stride of 2. The following blocks have similar structures, and the total number of layers is 101. A detailed description is provided by He et al. [21]. This paper uses the ResNet-101 network to classify targets and clutter that are detected by CFAR and manually pre-categorized; here, ResNet-101 serves as a simple two-class classifier. A well-trained ResNet-101 network performs much better than CFAR-based algorithms with a complicated discriminator using dozens of features.

IV. A Deep Learning-Based Compact Weighted Binary Classification Technique

The main purpose of DL-CWBC is to reduce the FA rate and maximize the probability of target detection. Its two unique features are controlling the trade-off between the FA rate and target detection and using the extreme distinction decision. The DL-CWBC procedure is described in Fig. 4 and has three major stages. In the pre-processing stage, targets and clutter are detected with the traditional CFAR algorithm. However, conventional CFAR cannot automatically distinguish between targets and clutter, so we manually separated them to form a training set for the ResNet-101 network. Because CFAR cannot detect the targets completely, the proposed approach relies on ground truth to obtain all targets. For training, targets and clutter were collected as chips of the two classes. A well-trained ResNet-101 performs much better than conventional CFAR; however, the conventional loss function cannot fully resolve the trade-off between the detection probability and the FA rate. In the second stage, the targets and clutter are trained with the ResNet-101 network. A cross-entropy error function is used to decide whether a SAR image chip is a target or clutter. The conventional cross-entropy error function is given by:
E(w) = −[t ln y(X, w) + (1 − t) ln(1 − y(X, w))],
where t = 1 for clutter and t = 0 for a target, y(X, w) is the output value between 0 and 1, X is the input image, and w is the weighting vector of the deep learning network.
The last stage is testing with the well-trained ResNet-101 network. In this paper, we propose two unique features: a modified cross-entropy error function and an extreme distinction decision. In the traditional approach, targets and clutter are equally weighted in the error function. By training on the two classes of target and clutter, the false alarms produced by the traditional CFAR are largely eliminated. Because the main focus of this paper is not to miss any target, we modified the cross-entropy error function to detect the targets perfectly. The modified cross-entropy error function is given by:
Ẽ(w) = −[λ t ln y(X, w) + (1 − t) ln(1 − y(X, w))],
where 0 ≤ λ ≤ 1 is the weighting coefficient of the error function, which helps reduce the FA rate while detecting almost all TOIs without missing any. λ is kept below 1 to put more weight on the target term. However, the weighting coefficient alone does not detect the targets perfectly, so an extreme distinction decision is declared for perfect target detection. The key idea of the extreme distinction decision is that a chip is considered a target whenever its probability of being a target is greater than 0. Finally, the extreme distinction decision is described as follows:
If 1 − y(X, w) > 0, the chip is considered a target.
An optimal λ could be found manually; however, here the optimal value of λ is determined through the ResNet-101 network. The proposed algorithm, built on simplicity, provides rigorous control of the trade-off between the false alarm rate and the probability of target detection.
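A minimal NumPy sketch of the modified error function and the extreme distinction decision, following the convention above that t = 1 denotes clutter and y(X, w) is the network's clutter probability (the function names are illustrative, not from the paper):

```python
import numpy as np

def weighted_cross_entropy(y, t, lam=0.5):
    """Modified cross-entropy: t = 1 for clutter, t = 0 for a target,
    y is the predicted clutter probability in (0, 1). The weighting
    coefficient 0 <= lam <= 1 down-weights the clutter term, shifting
    emphasis toward not missing targets."""
    y = np.clip(y, 1e-12, 1 - 1e-12)   # numerical safety for log
    return -np.mean(lam * t * np.log(y) + (1 - t) * np.log(1 - y))

def extreme_distinction_decision(y):
    """Declare a target whenever the target probability 1 - y is
    strictly greater than zero."""
    return (1.0 - y) > 0.0
```

With lam=1 this reduces to the conventional cross-entropy error function.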

V. Experiment Results

The DL-CWBC algorithm, with and without the extreme distinction decision, was developed, tested, and validated with a ResNet-101 network. First, the conventional CFAR algorithm detects targets and clutter simultaneously without discrimination. The detected targets and clutter are then manually classified into two classes and trained through ResNet-101. The numbers of targets and clutter in the training and testing sets are shown in Table 1: 3,441 targets and 79,136 clutter chips in the training set, and 788 targets and 11,604 clutter chips in the testing set.
Table 2 compares the performance of the traditional CFAR, the CFAR with a discrimination algorithm, and the DL-CWBC algorithms. The CFAR detects 12,392 targets and clutter, with a target detection rate of 99.1%. However, there are many FAs: the CFAR cannot distinguish whether a detection is a target or clutter, so the detected clutter is not automatically removed. The FA rate can be partially improved through various discrimination algorithms, but feature selection for discrimination is not automatic, the relevant clutter features in SAR images must be defined properly, and this may require a complicated procedure. From the features devised by the MIT Lincoln Laboratory [12], we found an optimal set of five dominant features: standard deviation, fractal dimension, mass, rotational inertia, and maximum CFAR. The resulting target detection rate was 99.3%, but the clutter removal rate was only 57.4%. In other words, the conventional CFAR with a discrimination algorithm achieves a high target detection rate but still produces many FAs. The proposed algorithm instead uses a deep learning network, which automatically selects and learns the relevant features and thus removes the complicated discrimination procedures. For comparison, the DL-CWBC with target and clutter weighted equally (λ = 1) is shown. Without the extreme distinction decision, the target detection rate is 97.6% and the clutter removal rate is 99.8%; however, 19 targets are missed. With the extreme distinction decision, the number of missed targets improves from 19 to 5, while the FA removal rate decreases by 1.6 percentage points, from 99.8% to 98.2%. The FAs are still largely removed compared with the CFAR. Because the main focus is not to miss any target, the default value of λ is not enough, and an optimal value must be found.
There are still five missing targets, so λ is also used as a hyper-parameter of the ResNet-101 network. Since λ can take many values between 0 and 1, ResNet-101 is trained to detect all the targets at a step size of 0.05. Table 3 shows the performance of the DL-CWBC algorithm in two specific cases, λ = 0.5 and λ = 0.1. For λ = 0.5, the target detection and clutter removal rates are not much different from the equally weighted case: five targets are still missed even with the extreme distinction decision. At λ = 0.1, however, no targets are missed, while the FA removal rate degrades from 98.2% to 94.5%. Fig. 5 shows the number of undetected targets and unremoved clutter versus the weighting coefficient of the modified cross-entropy error function.
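The sweep over λ can be sketched as a selection rule over the per-λ evaluation results. The function name, the selection rule, and the synthetic counts in the test are assumptions for illustration, not the paper's measurements:

```python
import numpy as np

def select_lambda(lams, missed, unremoved):
    """Given, for each candidate lams[i], the number of undetected
    targets missed[i] and the number of unremoved clutter detections
    unremoved[i] measured after training at that setting, return the
    candidate that misses no target while leaving the least clutter.
    Returns None if every candidate misses at least one target."""
    ok = np.flatnonzero(np.asarray(missed) == 0)
    if ok.size == 0:
        return None
    best = ok[np.argmin(np.asarray(unremoved)[ok])]
    return float(lams[best])
```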
The top of Fig. 5 shows the number of undetected targets versus λ. The solid and dashed lines indicate the results with and without the extreme distinction decision, respectively. Without the extreme distinction decision, there are dozens of undetected targets; after applying it, the overall number of undetected targets decreases. The optimal value of λ is found to be 0.1. The bottom of Fig. 5 shows the number of unremoved clutter detections versus λ, again with solid and dashed lines for the cases with and without the extreme distinction decision. These results follow directly from the target detection results above. The large arrows in the figures indicate the trade-off between undetected targets and unremoved clutter: as the number of undetected targets decreases in the top figure, the amount of unremoved clutter increases in the bottom figure.

VI. Conclusion

A DL-CWBC algorithm was developed, analyzed, and tested through a ResNet-101 deep learning network with a modified cross-entropy error function. The strength of the proposed algorithm lies in its simplicity: it does not use a complicated discriminator built on many features of bright pixels. The key ingredient is controlling the trade-off between the FA rate and target detection. Approximately 95% of the clutter detected by the CFAR was removed without any knowledge of clutter features. In addition, all the targets in the ground truth were detected with the extreme distinction decision. The proposed DL-CWBC algorithm proved to be simple and efficient, detecting every target while effectively removing clutter.

Fig. 1
A conventional CFAR algorithm.
Fig. 2
The clustering and detecting procedure of TOIs in the conventional CFAR algorithm.
Fig. 3
ResNet-101 network.
Fig. 4
Procedure of the DL-CWBC technique.
Fig. 5
The number of target and clutter missed versus λ for the DL-CWBC technique with or without an extreme distinction decision (EDD).
Table 1
Number of training and test datasets
Data Targets Clutter
Training set 3,441 79,136
Testing set 788 11,604
Total 4,229 90,740
Table 2
Comparison between the traditional CFAR and DL-CWBC (EDD = extreme distinction decision)

Classification   Decision   CFAR     MIT [12]   DL-CWBC (λ=1),   DL-CWBC (λ=1),
                                                without EDD      with EDD
Target           True       NA       783        769              783
Target           False      12,392   5          19               5
Clutter          True       NA       6,665      11,580           11,396
Clutter          False      NA       4,939      24               208
Detection rate (TOIs) (%)   99.1     99.3       97.6             99.4
Removal rate (FA) (%)       NA       57.4       99.8             98.2
Table 3
Performance of the DL-CWBC algorithm in terms of λ (EDD = extreme distinction decision)

λ = 0.5                     Without EDD   With EDD
Target           True       770           783
Target           False      18            5
Clutter          True       11,580        11,486
Clutter          False      24            118
Detection rate (TOIs) (%)   97.7          99.4
Removal rate (FA) (%)       99.8          98.2

λ = 0.1                     Without EDD   With EDD
Target           True       777           788
Target           False      11            0
Clutter          True       11,566        10,968
Clutter          False      38            636
Detection rate (TOIs) (%)   98.6          100
Removal rate (FA) (%)       99.7          94.5


1. W. G. Carrara, R. S. Goodman and R. M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Norwood, MA: Artech House, 2013.

2. P. Tait, Introduction to Radar Target Recognition. London, UK: Institution of Engineering and Technology, 2009.

3. H. Rohling, "Radar CFAR thresholding in clutter and multiple target situations," IEEE Transactions on Aerospace and Electronic Systems, vol. 19, no. 4, pp. 608–621, 1983.
4. A. Farina and F. A. Studer, "A review of CFAR detection techniques in radar systems," Microwave Journal, vol. 29, no. 9, pp. 115–128, 1986.

5. L. M. Novak, G. J. Owirka, W. S. Brower and A. L. Weaver, "The automatic target-recognition system in SAIP," Lincoln Laboratory Journal, vol. 10, no. 2, pp. 187–202, 1997.

6. G. B. Goldstein, "False-alarm regulation in log-normal and Weibull clutter," IEEE Transactions on Aerospace and Electronic Systems, vol. 9, no. 1, pp. 84–92, 1973.
7. L. M. Novak, G. J. Owirka and C. M. Netishen, "Performance of a high-resolution polarimetric SAR automatic target recognition system," Lincoln Laboratory Journal, vol. 6, no. 1, pp. 11–24, 1993.

8. L. M. Novak, S. D. Halversen, G. Owirka and M. Hiett, "Effects of polarization and resolution on SAR ATR," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 1, pp. 102–116, 1997.
9. S. Blake, "OS-CFAR theory for multiple targets and nonuniform clutter," IEEE Transactions on Aerospace and Electronic Systems, vol. 24, no. 6, pp. 785–790, 1988.
10. L. M. Kaplan, "Improved SAR target detection via extended fractal features," IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 2, pp. 436–451, 2001.
11. Q. Pham, T. M. Brosnan and M. J. Smith, "A reduced false alarm rate CFAR-based prescreener for SAR ATR," In: Proceedings of the US Army Research Laboratory Sensors and Electron Devices Symposium; Adelphi, MD. 1997.

12. D. E. Kreithen, S. D. Halversen and G. J. Owirka, "Discriminating targets from clutter," The Lincoln Laboratory Journal, vol. 6, no. 1, pp. 25–52, 1993.

13. J. I. Park, S. H. Park and K. T. Kim, "New discrimination features for SAR automatic target recognition," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 3, pp. 476–480, 2013.
14. G. Gao, L. Liu, L. Zhao, G. Shi and G. Kuang, "An adaptive and fast CFAR algorithm based on automatic censoring for target detection in high-resolution SAR images," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 6, pp. 1685–1697, 2009.
15. L. Zeng, D. Zhou, Q. Pan, C. Lu and Y. Zhou, "SAR target detection based on PSIFT feature clustering," In: Proceedings of 2019 IEEE International Geoscience and Remote Sensing Symposium; Yokohama, Japan. 2019; pp 17–20.
16. W. Yu, Y. Wang, H. Liu and J. He, "Superpixel-based CFAR target detection for high-resolution SAR images," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 5, pp. 730–734, 2016.
17. S. Chen and H. Wang, "SAR target recognition based on deep learning," In: Proceedings of 2014 International Conference on Data Science and Advanced Analytics (DSAA); Shanghai, China. 2014; pp 541–547.
18. M. Ma, J. Chen, W. Liu and W. Yang, "Ship classification and detection based on CNN using GF-3 SAR images," Remote Sensing, vol. 10, no. 12, article no. 2043, 2018; https://doi.org/10.3390/rs10122043 .
19. Y. L. Chang, A. Anagaw, L. Chang, Y. C. Wang, C. Y. Hsiao and W. H. Lee, "Ship detection based on YOLOv2 for SAR imagery," Remote Sensing, vol. 11, no. 7, article no. 786, 2019; https://doi.org/10.3390/rs11070786 .
20. S. Bao, J. Meng, L. Sun and Y. Liu, "Detection of ocean internal waves based on Faster R-CNN in SAR images," Journal of Oceanology and Limnology, vol. 38, no. 1, pp. 55–63, 2020.
21. K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition," In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV. 2016; pp 770–778.


Seung Mo Seo received a B.S. degree from Hongik University, in 1998 and M.S. and Ph.D. degrees from The Ohio State University in 2001 and 2006, respectively, all in electrical engineering. From 1999 to 2006, he was a graduate research associate with the ElectroScience Laboratory (ESL), Department of Electrical and Computer Engineering, The Ohio State University, Columbus, where he focused on the development of fast integral equation methods. From 2007 to 2010, he was a senior engineer at the Digital Media and Communication (DMC) R&D Center, Samsung Electronics, where he developed an RF circuit and antenna design and simulation. From 2011 to the present, he has been a principal researcher at the Agency for Defense Development. From 2011 to 2016, he developed an anti-jamming satellite navigation system. His current research interests are synthetic aperture radar (SAR) and automatic target recognition (ATR). He is currently an IEEE senior member.


Yeoreum Choi received a B.S. and M.S. degrees from the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2015 and 2017, respectively. He is currently a researcher at the Agency for Defense Development (ADD), Daejeon, South Korea. His research interests include deep learning with synthetic aperture radar (SAR) imagery.


Ho Lim received a B.S. in electronics engineering from Soongsil University, Seoul, Korea, in 2006 and a Ph.D. from the Department of Electrical Engineering at KAIST, Daejeon, Korea, in 2011. Since 2011, he has been working at the Agency for Defense Development as a senior research engineer. His research interests include SAR-ATD and ATR.


Ji Hoon Park received B.S. and Ph.D. degrees from the Department of Electrical and Electronic Engineering, Korea Advanced Institute of Science and Technology in 2009 and 2015, respectively. From 2015 to the present, he has worked as a senior researcher at the Agency for Defense Development. His current research interest is automatic target detection and recognition with SAR imagery.