High-resolution Radar Image Interpretation


I. Unsupervised SAR/ISAR Image Target Classification

Learning advanced semantic representations from unlabeled SAR/ISAR images with an unsupervised learning scheme can address the shortage of labeled samples in radar recognition systems. Using contrastive learning on the MSTAR dataset, samples of the same category automatically cluster together and separate from the rest in the embedding space. The pre-trained encoder embeds raw SAR images into a more discriminative feature space, which accelerates the convergence of the downstream SAR target classifier during the fine-tuning stage. We also transfer the knowledge learned on the MSTAR dataset to the OpenSARship dataset, which achieves better results than training from scratch [1]. Meanwhile, we integrate deformable convolution into the base encoder of the contrastive learning model to achieve more precise classification of deformed ISAR images [2].
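
As a rough illustration of the contrastive pre-training stage described above, the sketch below implements an NT-Xent (SimCLR-style) objective on two augmented views of a batch of SAR image chips. The loss form, temperature value, and the encoder/projector placeholders are assumptions for illustration, not the exact configuration used in [1] or [2].

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) projections of two augmented views of the same N SAR chips.
    Positive pairs are (z1[i], z2[i]); all other samples in the batch act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude each sample's self-similarity
    # The positive for row i is row i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Sketch of one pre-training step (encoder/projector/augment are placeholders):
# view1, view2 = augment(batch), augment(batch)   # two random augmentations of SAR chips
# h1, h2 = encoder(view1), encoder(view2)         # base encoder (e.g., a ResNet backbone)
# z1, z2 = projector(h1), projector(h2)           # small MLP projection head
# loss = nt_xent_loss(z1, z2)
# loss.backward(); optimizer.step()
```

After pre-training, the projection head is typically discarded and the encoder is fine-tuned with a lightweight classifier on the available labeled samples.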

Fig. 1. Unsupervised pre-training on the MSTAR dataset with a contrastive learning method and transferring the learned knowledge to the OpenSARship dataset by fine-tuning [1].

Fig. 2. The optimization flow of the encoder in CLISAR-Net [2].
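
For the deformation-robust encoder in [2], the key ingredient mentioned above is deformable convolution. The following minimal sketch, using torchvision's DeformConv2d, shows how a deformable 3x3 block might replace a standard convolution in the base encoder; the block structure and channel sizes are illustrative assumptions, not the exact CLISAR-Net layers.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Illustrative drop-in replacement for a standard 3x3 conv layer:
    a small auxiliary conv predicts per-location sampling offsets, which
    the deformable conv uses to adapt its receptive field to geometric
    distortions in the ISAR image."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (dx, dy) per tap of the 3x3 kernel -> 2 * 3 * 3 = 18 channels
        self.offset_conv = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)   # (N, 18, H, W) per-location sampling offsets
        return self.act(self.bn(self.deform_conv(x, offsets)))

# Example: a single-channel 128x128 ISAR chip through one deformable block.
# block = DeformableBlock(1, 32)
# features = block(torch.randn(4, 1, 128, 128))   # -> (4, 32, 128, 128)
```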

Related Publications

[1] Hao Pei, Mingjie Su, Gang Xu*, Mengdao Xing and Wei Hong, "Self-Supervised Feature Representation for SAR Image Target Classification Using Contrastive Learning," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 9461-9473, 2023, doi: 10.1109/JSTARS.2023.3321769.

[2] Peishuang Ni, Yanyang Liu, Hao Pei, Haoze Du, Haolin Li, and Gang Xu*, "CLISAR-Net: A Deformation-Robust ISAR Image Classification Network Using Contrastive Learning, " Remote Sensing, vol. 15, no. 1, p. 33, Dec. 2022, doi: 10.3390/rs15010033.