### I. Introduction

### II. A Measurement Image Translation-Automatic Target Recognition Technique

### 1. VGG Network based on CNN

### 2. Cycle Generative Adversarial Network

*G* and *F* are mapping functions from one domain *X* to the other domain *Y* and vice versa, respectively. *D*<sub>*X*</sub> and *D*<sub>*Y*</sub> are discriminators between translated samples and real samples. *X* and *Y* are the two domains for measurement and simulation images. *x* ~ *p*<sub>data</sub>(*x*) and *y* ~ *p*<sub>data</sub>(*y*) are the data distributions of the two domains, respectively. The key to GAN's success is the idea of an adversarial loss that forces the generated images to be indistinguishable from real images. The other is the cycle consistency loss, which is expressed as follows:
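The cycle consistency term described here matches the standard CycleGAN form; written with the *L*<sub>2</sub> norm this section uses (the original CycleGAN formulation uses *L*<sub>1</sub>), it is:

$$
\mathcal{L}_{cyc}(G,F)=\mathbb{E}_{x\sim p_{data}(x)}\!\left[\lVert F(G(x))-x\rVert_2\right]+\mathbb{E}_{y\sim p_{data}(y)}\!\left[\lVert G(F(y))-y\rVert_2\right]
$$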

where ‖·‖<sub>2</sub> indicates the *L*<sub>2</sub> norm. Finally, the loss function of CycleGAN is written as follows:
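In the standard CycleGAN formulation, which the description above follows, the full objective combines both adversarial losses with the cycle consistency loss weighted by a hyperparameter *λ* (the weight *λ* is an assumption here, not a value stated in the text):

$$
\mathcal{L}(G,F,D_X,D_Y)=\mathcal{L}_{GAN}(G,D_Y,X,Y)+\mathcal{L}_{GAN}(F,D_X,Y,X)+\lambda\,\mathcal{L}_{cyc}(G,F)
$$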

*G*<sub>*X*→*Y*</sub> translates a measurement image into a simulation-like image. To ensure cycle consistency, the generated image is translated back into a reconstructed image by the generator *G*<sub>*Y*→*X*</sub>. The cycle consistency loss function tries to minimize the difference between a measurement image and its reconstruction. A discriminator *D*<sub>*Y*</sub> tries to force the generated images to be indistinguishable from real simulation images. The final goal is to translate measurement images into simulation-like images.
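The cycle described above can be sketched numerically. Below is a minimal sketch using toy 2×2 image chips and simple invertible pixel-wise maps standing in for the trained generators; the function name `cycle_consistency_loss` and the toy maps are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """L2-norm cycle consistency loss between an image and its reconstruction."""
    return np.linalg.norm(x - x_reconstructed)

# Toy measurement-domain chip (stand-in for a real SAR image chip).
x = np.array([[0.2, 0.8], [0.5, 0.1]])

# Illustrative generators: G maps X -> Y, F maps Y -> X.
# Here they are exact inverses of each other, NOT trained networks.
G = lambda img: img * 2.0 + 0.1    # X -> Y
F = lambda img: (img - 0.1) / 2.0  # Y -> X

x_rec = F(G(x))                    # reconstructed image F(G(X))
loss = cycle_consistency_loss(x, x_rec)
print(loss)  # exactly inverse maps give (near) zero cycle loss
```

Training drives the learned *G* and *F* toward this inverse relationship, which is what keeps the translated image faithful to the original target.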

### 3. A Simulation-Like Image Translation

The measurement image *X* and the simulation image *Y* are shown in Fig. 6(a) and 6(d), respectively. The translated image *G*(*X*) from the mapping *G*: *X* → *Y* is shown in Fig. 6(b); the measurement image is translated into a simulation-like image. The reconstructed image *F*(*G*(*X*)) is shown in Fig. 6(c). The consistency loss between the original and the reconstructed image is minimized using the *L*<sub>2</sub> norm. The translated image *F*(*Y*) from the mapping *F*: *Y* → *X* is shown in Fig. 6(e). For the consistency loss, the reconstructed image *G*(*F*(*Y*)) is generated, as shown in Fig. 6(f). In this paper, measurement images are used as the input *X*. The translated image *G*(*X*) is a simulation-like image. To compute the consistency loss, the reconstructed image *F*(*G*(*X*)) is generated. The loss function is rewritten as follows:
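Since only the measurement-to-simulation direction is used as input here, a plausible rewritten form keeps only the *X*-side term of the cycle consistency loss (a sketch consistent with the description above, not necessarily the paper's verbatim equation):

$$
\mathcal{L}_{cyc}(G,F)=\mathbb{E}_{x\sim p_{data}(x)}\!\left[\lVert F(G(x))-x\rVert_2\right]
$$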

*G* makes a translated image close to the other domain. The discriminator *D*<sub>*Y*</sub> compares the generated image *G*(*X*) with the original simulation image *Y* to minimize the adversarial loss.
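For this direction, the adversarial loss in the standard (cross-entropy) GAN form reads as follows; note that the CycleGAN reference implementation actually substitutes a least-squares variant, so this exact form is an assumption:

$$
\mathcal{L}_{GAN}(G,D_Y,X,Y)=\mathbb{E}_{y\sim p_{data}(y)}\!\left[\log D_Y(y)\right]+\mathbb{E}_{x\sim p_{data}(x)}\!\left[\log\!\left(1-D_Y(G(x))\right)\right]
$$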

*N* is the number of pixels in a chip. *M*<sub>*i*</sub> and *S*<sub>*i*</sub> are the *i*-th pixel intensities of the measurement and simulation images, respectively. *μ*<sub>*M*</sub> and *μ*<sub>*S*</sub> are the average pixel intensities of the measurement and simulation images, respectively.
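The quantities *N*, *M*<sub>*i*</sub>, *S*<sub>*i*</sub>, *μ*<sub>*M*</sub>, and *μ*<sub>*S*</sub> are the ingredients of a Pearson-style correlation between a measurement chip and a simulation chip. A minimal sketch, assuming the metric is the sample correlation coefficient (the paper's exact formula may differ):

```python
import numpy as np

def chip_correlation(M, S):
    """Pearson correlation between two image chips of equal size.

    M, S : 2-D arrays of pixel intensities (measurement and simulation).
    """
    M = M.ravel().astype(float)
    S = S.ravel().astype(float)
    mu_M, mu_S = M.mean(), S.mean()          # average pixel intensities
    num = np.sum((M - mu_M) * (S - mu_S))    # covariance term over the N pixels
    den = np.sqrt(np.sum((M - mu_M) ** 2) * np.sum((S - mu_S) ** 2))
    return num / den

# Toy chips for illustration (not the paper's data).
measurement = np.array([[10, 20], [30, 40]])
simulation  = np.array([[11, 19], [32, 41]])
r_self = chip_correlation(measurement, measurement)
r_sim  = chip_correlation(measurement, simulation)
print(r_self)  # identical chips -> 1.0
print(r_sim)   # similar chips -> close to 1
```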

### III. Numerical Experiments

*X* and *Y* is shown in Table 4. Based on the proposed algorithm, the target classification results are given in Table 5. From the confusion matrix, the accuracy is approximately 80%. Fig. 8 shows the improvement or deterioration for each target class. Overall, the accuracy improved. The improvement for Class 5 is smaller than for the other classes because the Class 5 data and the data for the other classes were obtained from heterogeneous SAR sensors.
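The accuracy figure above is derived from the confusion matrix. As an illustration of that computation (the matrix entries below are made up, not the actual values from Table 5):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true class, cols = predicted.
# These numbers are illustrative only, not the paper's Table 5.
cm = np.array([
    [45,  3,  2],
    [ 4, 40,  6],
    [ 5,  5, 40],
])

# Overall accuracy: correctly classified samples (diagonal) over all samples.
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 125 / 150 ~ 0.833
```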

*β*<sub>1</sub> = 0.9, *β*<sub>2</sub> = 0.999, the learning rate *μ* = 10<sup>−5</sup>, and the batch size is 16. In the top figure, the solid line indicates the adversarial loss and the dashed line the cycle consistency loss. The bottom figure shows the CycleGAN loss. After 110 epochs, the error converged to less than 1.0.
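*β*<sub>1</sub> and *β*<sub>2</sub> are the standard moment-decay rates of the Adam optimizer; assuming that is the optimizer in use here, a single bias-corrected Adam update with these hyperparameters looks like:

```python
import numpy as np

# Hyperparameters from the text (standard Adam values); eps is an assumed default.
beta1, beta2, lr, eps = 0.9, 0.999, 1e-5, 1e-8

def adam_step(theta, grad, m, v, t):
    """One bias-corrected Adam update for parameters theta at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0, -2.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
theta, m, v = adam_step(theta, np.array([0.5, -0.3]), m, v, t=1)
print(theta)  # each parameter moves by ~lr opposite its gradient's sign
```

On the first step the bias correction makes the update roughly *μ* · sign(gradient), which is why the small learning rate of 10<sup>−5</sup> gives the slow, stable convergence reported after 110 epochs.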