Infrared aircraft few-shot classification method based on cross-correlation network
Author: HUANG Zhen, ZHANG Yong, GONG Jin-Fu

Affiliation: 1. Key Laboratory of Infrared System Detection and Imaging Technology, Chinese Academy of Sciences, Shanghai 200083, China; 2. University of Chinese Academy of Sciences, Beijing 100049, China; 3. Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China

CLC Number: TP391.4

Fund Project: Supported by the National Pre-research Program during the 14th Five-Year Plan (514010405)

Abstract:

In response to the scarcity of infrared aircraft samples and the tendency of traditional deep learning methods to overfit, a few-shot infrared aircraft classification method based on a cross-correlation network is proposed. The method combines two core modules, a simple parameter-free self-attention module and a cross-attention module, and achieves effective classification of infrared aircraft under few-shot conditions by analyzing the self-correlation within images and the cross-correlation between support and query images. The proposed cross-correlation network integrates the two modules and is trained end to end. The parameter-free self-attention module extracts the internal structure of an image, while the cross-attention module computes the cross-correlation between images, further extracting and fusing inter-image features. Compared with existing few-shot infrared target classification models, this model attends to the geometric structure and thermal texture information of infrared images by modeling the semantic relevance between support-set and query-set features, and therefore focuses better on the target objects. Experimental results show that the method outperforms existing infrared aircraft classification methods on various classification tasks, with the largest improvement in classification accuracy exceeding 3%. Ablation and comparison experiments further confirm the effectiveness of the method.
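
The abstract describes two operations: a parameter-free self-attention applied within each image and a cross-attention that correlates support and query features. As a rough illustration only (not the authors' implementation), the PyTorch sketch below pairs a SimAM-style parameter-free self-attention with a dense cosine cross-correlation between support and query feature maps; the function names, feature shapes, and the assumption that the "simple parameter-free self-attention" is SimAM-like are all assumptions made for this sketch.

```python
# Hypothetical sketch of the two ingredients named in the abstract; not the
# published model. Assumes backbone feature maps of shape (B, C, H, W).
import torch
import torch.nn.functional as F


def parameter_free_self_attention(x, eps=1e-4):
    """SimAM-style attention: reweights each activation by an energy term
    measuring its deviation from the channel-wise spatial mean
    (no learnable parameters)."""
    h, w = x.shape[2], x.shape[3]
    n = h * w - 1
    mu = x.mean(dim=(2, 3), keepdim=True)
    d = (x - mu) ** 2                              # squared deviation per position
    v = d.sum(dim=(2, 3), keepdim=True) / n        # spatial variance estimate
    e_inv = d / (4 * (v + eps)) + 0.5              # inverse energy of each position
    return x * torch.sigmoid(e_inv)


def cross_correlation(support, query):
    """Dense cosine cross-correlation between L2-normalised support and query
    feature maps; returns a (B, Hq*Wq, Hs*Ws) semantic-relevance matrix."""
    s = F.normalize(support.flatten(2), dim=1)     # (B, C, Hs*Ws)
    q = F.normalize(query.flatten(2), dim=1)       # (B, C, Hq*Wq)
    return torch.bmm(q.transpose(1, 2), s)         # similarity of every position pair


if __name__ == "__main__":
    support_feat = torch.randn(1, 64, 10, 10)      # support-image backbone features
    query_feat = torch.randn(1, 64, 10, 10)        # query-image backbone features
    corr = cross_correlation(parameter_free_self_attention(support_feat),
                             parameter_free_self_attention(query_feat))
    print(corr.shape)                              # torch.Size([1, 100, 100])
```

In the pipeline described by the abstract, such a relevance matrix would feed the subsequent feature fusion and classification stages; here it is only printed to show the shapes involved.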

Get Citation

HUANG Zhen, ZHANG Yong, GONG Jin-Fu. Infrared aircraft few-shot classification method based on cross-correlation network[J]. Journal of Infrared and Millimeter Waves, 2025, 44(1): 104-112.

History
  • Received: March 29, 2024
  • Revised: November 10, 2024
  • Accepted: June 07, 2024
  • Online: November 08, 2024