Abstract: Space-borne infrared remote sensing images have crucial application value in fields such as environmental monitoring and military reconnaissance. However, owing to technological limitations, atmospheric disturbances, and sensor noise, these images often suffer from insufficient resolution and blurred texture details, which severely restricts the accuracy of subsequent analysis and processing. To address these issues, a new super-resolution generative adversarial network model is proposed. The model integrates dense connections with the Swin Transformer architecture to achieve effective cross-layer feature transmission and contextual information utilization while enhancing its global feature extraction capability. In addition, the traditional residual connections are improved with multi-scale channel attention-based feature fusion, allowing the network to integrate multi-scale features more flexibly and thereby improving the quality and efficiency of feature fusion. A combined loss function is constructed to comprehensively optimize the generator. Comparative experiments on multiple datasets demonstrate that the proposed algorithm achieves significant improvements in reconstruction quality. Moreover, the super-resolved images yield better performance in downstream tasks such as object detection, confirming the effectiveness and application potential of the algorithm for space-borne infrared remote sensing image super-resolution.
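
For orientation, the following PyTorch sketch illustrates one plausible form of the multi-scale channel attention-based fusion that replaces plain residual addition. The class name, layer configuration, and reduction ratio are illustrative assumptions in the spirit of AFF-style fusion, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleChannelAttentionFusion(nn.Module):
    """Illustrative sketch (assumed design): fuses a residual branch and an
    identity branch with channel attention weights derived from both a local
    (per-pixel) and a global (pooled) channel context."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = max(channels // reduction, 1)
        # Local channel context: 1x1 convolutions keep spatial resolution.
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        # Global channel context: squeeze spatial dimensions first.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, residual: torch.Tensor, identity: torch.Tensor) -> torch.Tensor:
        summed = residual + identity
        # Combine local and global channel descriptors into fusion weights
        # (the global branch broadcasts over the spatial dimensions).
        weights = self.sigmoid(self.local_att(summed) + self.global_att(summed))
        # Attention-weighted blend replaces the plain residual addition.
        return weights * residual + (1.0 - weights) * identity

# Usage example with hypothetical feature maps:
# fuse = MultiScaleChannelAttentionFusion(64)
# out = fuse(torch.randn(2, 64, 48, 48), torch.randn(2, 64, 48, 48))
```

Compared with a fixed residual sum, the learned weights let the network decide per channel and per location how much of each branch to keep, which is what "more flexible multi-scale feature integration" refers to above.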
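
Likewise, a minimal sketch of a combined generator objective is given below. The abstract does not specify the loss components, so the pixel and adversarial terms and the weights here are assumptions in the style of SRGAN training; a perceptual term is often added as well but is omitted for brevity.

```python
import torch
import torch.nn as nn

class CombinedGeneratorLoss(nn.Module):
    """Assumed SRGAN-style combined objective: pixel term plus adversarial
    term with hypothetical weighting coefficients."""

    def __init__(self, w_pixel: float = 1.0, w_adv: float = 1e-3):
        super().__init__()
        self.w_pixel = w_pixel
        self.w_adv = w_adv
        self.pixel_loss = nn.L1Loss()
        self.adv_loss = nn.BCEWithLogitsLoss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor,
                disc_logits_fake: torch.Tensor) -> torch.Tensor:
        # Pixel term anchors low-frequency content to the ground truth.
        l_pix = self.pixel_loss(sr, hr)
        # Adversarial term: the generator is rewarded when the discriminator
        # labels its super-resolved outputs as real (target = 1).
        l_adv = self.adv_loss(disc_logits_fake, torch.ones_like(disc_logits_fake))
        return self.w_pixel * l_pix + self.w_adv * l_adv
```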