Dynamic Infrared Target Dataset Based on Remote Sensing Data and Physical Modeling
doi: 10.11972/j.issn.1672-8785.2025.11.003
ZHOU Xiao-xuan1, LI Li-yuan2, HU Zhuo-yue1, RAO Peng1, LIN Chang-qing1, CHEN Fan-sheng1,2, SUN Sheng-li1
1. Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2. Institute of Optoelectronics, Fudan University, Shanghai 200433, China
Funds: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB0580000); National Natural Science Foundation of China (Nos. 62575297 & 62505342)
Abstract
The rapid development of artificial intelligence has significantly enhanced the potential for target detection and tracking, but the lack of high-quality dynamic infrared datasets has limited the target perception capabilities of space-based staring detection systems. This paper proposes a construction scheme that integrates remote sensing data with physical modeling to generate a multi-frame, three-band infrared dataset containing dynamic clouds, aircraft, and ships. The cloud motion vector fields are retrieved from high-temporal-resolution geostationary observations and mapped onto the high-resolution background of the SDGSAT-1 thermal infrared spectrometer (TIS) to generate dynamic cloud fields. Simultaneously, the motion trajectories and radiation characteristics of aircraft and ships are generated based on historical TIS observations and simulation models. This dataset can be used to train and evaluate target detection and tracking models and provides data support for infrared system performance evaluation.
0 Introduction
Space-based infrared remote sensing plays a key role in all-day, integrated air and sea surveillance. However, limited spatial resolution, complex cloud fields, and degradation along the imaging link make stable detection and robust tracking under low signal-to-noise and signal-to-clutter ratios challenging. At the same time, high-resolution, wide-bandwidth staring systems are costly to develop and difficult to engineer. There is therefore an urgent need to clarify detection limits under controllable, realistic conditions and to provide mission-oriented data support for algorithm iteration.
Over the past five years, research on infrared small target detection has developed rapidly, driven by deep learning, with improvements built on networks such as R-CNN [1-2], YOLO [3-4], Transformer [5-6], and Mamba [7-8] that exploit multimodal features and structural priors. High-quality datasets play a key role in this progress, providing rich training and validation data for algorithm models. They establish the foundation for machine learning models to acquire multi-spectral features, target contours, and motion patterns, thereby significantly enhancing system robustness and accuracy in challenging environments characterized by strong clutter and low contrast.
However, existing infrared datasets still have significant limitations, mainly a lack of dynamics and authenticity. On the one hand, most aircraft/ship datasets are based on Google Earth or high-resolution visible/near-infrared imagery. They have clear morphology but are mostly single-frame and static, for example, the UOW-Vessel dataset [9], DMFGRS [10], and FGSC-23 [11]. Most aircraft datasets focus on aircraft parked at airports [12], such as HRPlanesV2 [13], OPT-Aircraft_v1.0 [12], and MTARSI [14]. The parallax effect of the Sentinel-2 MultiSpectral Instrument (MSI) enables it to observe aircraft in flight during the day, and Segundo M P et al. used this to construct a flying-aircraft dataset [15], but it is difficult to cover nighttime detection scenarios. Moreover, single-frame data cannot fully capture a target's continuous motion trajectory or changes in the earth background. On the other hand, because data acquisition is difficult and few datasets are publicly available, only a few studies have constructed aircraft [16-17] and ship [18] datasets based on thermal infrared remote sensing.
To compensate for the lack of data, some studies have combined real data with physical simulation to synthesize video-like infrared samples for training augmentation and algorithm evaluation. For example, Westlake S T et al. combined real data with simulation to obtain a long-wave infrared ship dataset of video sequences, which was used to supplement training data and improve detection accuracy [19]; Jakubowicz J et al. used aircraft infrared signal simulation to generate a daytime aircraft dataset against a white-noise background for detection testing [20]; Li Z X et al. used measured long-wave infrared satellite images as backgrounds and cruising civil aircraft as simulated targets to construct the IRAir dataset and test algorithm performance [21]. These studies contributed valuable methods and data for infrared detection performance analysis and algorithm research, but they did not account for changes in target conditions and the earth background during flight, and their single-channel data lose multi-spectral information, which is particularly important for detecting weak targets at low resolution.
Therefore, to address the scarcity of high-resolution, multi-frame, multi-spectral infrared remote sensing data, this paper proposes a method to integrate remote sensing data with physical modeling. The cloud motion vector field is estimated from minute-level thermal infrared observations of the Himawari-8/9 geostationary satellites and mapped onto the high-resolution background provided by SDGSAT-1 TIS to generate a dynamic cloud field. Based on historical observation data and simulation models, the motion trajectories of aircraft and ships are generated while retaining multi-spectral information. On this basis, a three-band, multi-frame thermal infrared aircraft/ship dataset of approximately 150 GB was constructed. It comprises 2,400 images, each covering 100 km × 100 km at 30 m resolution, spanning seven typical surface backgrounds (clouds, oceans, deserts, snow-capped mountains, grasslands, forests, and urban areas) and eight categories of maritime and aerial targets under varying operational conditions and scales. This dataset provides controllable and realistic data support for the development of small target detection and tracking algorithms, infrared system performance evaluation, and detection limit analysis.
1 Dataset construction method
1.1 Analysis of target radiation characteristics
This dataset uses real aircraft and ship samples obtained by SDGSAT-1 TIS as a reference and combines actual measurements with theoretical models to generate simulation targets.
In the long-wave thermal infrared (8–12.5 μm) band, the surfaces of aircraft and ships are usually regarded as opaque gray bodies with uniform emissivity. The emissivity of fuselage/hull coating materials is usually 0.85–0.98, and that of the sea surface is about 0.98–0.99. The spectral radiance of the target at wavelength λ can be expressed as
$$L_{\mathrm{target}}(\lambda)=\varepsilon_{\mathrm{target}}\,\frac{c_1}{\pi\lambda^{5}}\cdot\frac{1}{e^{c_2/(\lambda T_{\mathrm{target}})}-1}$$
(1)
Where ε_target is the infrared emissivity of the target surface; the radiation constants are c1 = 2πhc² ≈ 3.7418×10⁻¹⁶ W·m² and c2 = hc/k ≈ 1.4388×10⁻² m·K (h is Planck's constant, c is the speed of light, k is the Boltzmann constant); T_target is the surface temperature of the target. When an aircraft flies near the top of the troposphere, the equilibrium temperature of its fuselage surface is dominated by the ambient temperature and is usually about 0 to 10 K lower than the earth's background temperature [22]. Ships are affected by structural materials, operating conditions, and residual solar heating of the deck; compared with the relatively constant sea surface temperature, they usually show a diurnal variation of "high during the day, low at night" [23].
The target signal received by the detector also needs to consider the influence of atmospheric transmission. The target radiance reaching the top of the atmosphere can be expressed as
$$L_{\mathrm{TOA\_target}}(\lambda)=L_{\mathrm{target}}(\lambda)\,\tau_{\mathrm{atm}}(\lambda)+L_{\mathrm{path}}(\lambda)$$
(2)
Where τ_atm represents the atmospheric transmittance; L_path represents the atmospheric path radiance.
In remote sensing images, the signal is then mapped to digital number (DN) values through optical transmission, integration time, and noise chain:
$$L_{\lambda_i}=a_i\,\mathrm{DN}_i+b_i$$
(3)
Where DN_i is the signal value in the i-th spectral band after conversion by the analog-to-digital converter, i.e., the grayscale value in the remote sensing image; a_i and b_i denote the gain and bias coefficients of the i-th spectral band.
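The radiometric chain of Eqs. (1)-(3) can be sketched in a few lines of Python. This is a minimal illustration rather than the paper's simulation code: the wavelength, temperature, emissivity, transmittance, path radiance, gain, and bias values below are assumed for demonstration and are not TIS calibration values.

```python
import numpy as np

# Physical constants for Planck's law (SI units).
C1 = 3.7418e-16   # first radiation constant, 2*pi*h*c^2 [W*m^2]
C2 = 1.4388e-2    # second radiation constant, h*c/k [m*K]

def target_radiance(wavelength_m, temperature_k, emissivity):
    """Grey-body spectral radiance of the target surface, Eq. (1) [W/(m^2*sr*m)]."""
    return emissivity * C1 / (np.pi * wavelength_m**5) \
        / (np.exp(C2 / (wavelength_m * temperature_k)) - 1.0)

def toa_radiance(l_target, tau_atm, l_path):
    """Top-of-atmosphere radiance after atmospheric attenuation, Eq. (2)."""
    return l_target * tau_atm + l_path

def radiance_to_dn(l_band, gain, bias):
    """Invert the linear calibration L = a*DN + b of Eq. (3) to get a DN value."""
    return (l_band - bias) / gain

# Illustrative example: a cold aircraft skin (~230 K) in the B2 band (~10.8 um).
l_t = target_radiance(10.8e-6, 230.0, 0.9)
l_toa = toa_radiance(l_t, tau_atm=0.8, l_path=0.0)   # assumed atmosphere
dn = radiance_to_dn(l_toa, gain=1000.0, bias=0.0)    # assumed calibration
```

In practice τ_atm and L_path would come from an atmospheric radiative-transfer model rather than fixed scalars.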
Statistical results for 21,044 samples from the TIFAD.v1 dataset indicate [17] that aircraft flying at high altitude appear as negative-contrast targets, meaning their radiance is lower than that of the background. In the three spectral bands, the DN differences between target and background are 0.84% to 12.96%, 2.64% to 15.16%, and 1.95% to 11.90%, respectively. Overall, the B2 band (10.3 to 11.3 μm) has the strongest target detection capability. Considering atmospheric transmission, the transmittance of the B1 (8 to 10.5 μm) and B3 (11.5 to 12.5 μm) bands is more sensitive to altitude than that of B2. High-altitude targets show smaller inter-band differences, while ground targets show larger ones, which is an important cue for discriminating high-altitude targets. Meanwhile, since the measurements are almost unaffected by solar reflection, the thermal imager shows equivalent detection capability for aircraft targets by day and by night.
According to statistics from the ship dataset TISD, the temperature difference between ship targets and the sea surface is about −5 to −0.4 K for negative-contrast targets and 0.3 to 6 K for positive-contrast targets; targets with very small temperature differences are difficult to detect reliably [18].
Figure 1 shows real and simulated aircraft and ship targets in the scene, together with the DN variation near their center positions in the three spectral bands.
Fig.1 Comparison between simulated targets and actual measured targets.
In summary, under the joint constraints of the physical model and sample statistics, the target's three-channel DN distribution is set so as to remain consistent with real observations in radiation intensity, inter-band differences, and day-night temporal characteristics.
1.2 Dynamic background simulation
For the cloud background, cloud detection is first performed to generate a cloud mask. Then, two Himawari-8/9 AHI thermal infrared images (with a spatial resolution of approximately 2 km and a time interval Δt of 10 minutes) before and after the imaging moment are selected to estimate the cloud optical flow field within the area limited by the cloud mask [24]. The obtained displacement field is mapped/interpolated onto the target high-resolution background, thereby synthesizing a cloud field sequence that is continuous in time and space at high resolution.
Assume that the two frames of thermal infrared images are I(x, y, t) and I(x, y, t+dt), respectively, and that the radiance of the same physical point is approximately constant over a short time with a small displacement, that is,
$$I(x,y,t)\approx I(x+dx,\,y+dy,\,t+dt)$$
(4)
Performing a first-order Taylor expansion on the right-hand side of equation (4) yields the classical optical flow constraint equation:
$$f_x u+f_y v+f_t=0,\qquad f_x=\frac{\partial f}{\partial x},\ \ f_y=\frac{\partial f}{\partial y},\ \ u=\frac{dx}{dt},\ \ v=\frac{dy}{dt}$$
(5)
Where f_x and f_y are the spatial image gradients and f_t is the temporal gradient. Since a single pixel provides only one constraint, the Lucas-Kanade method is used to solve for (u, v):
$$\begin{bmatrix}u\\ v\end{bmatrix}=\begin{bmatrix}\sum_i f_{x_i}^{2} & \sum_i f_{x_i}f_{y_i}\\ \sum_i f_{x_i}f_{y_i} & \sum_i f_{y_i}^{2}\end{bmatrix}^{-1}\begin{bmatrix}-\sum_i f_{x_i}f_{t_i}\\ -\sum_i f_{y_i}f_{t_i}\end{bmatrix}$$
(6)
To accommodate larger displacements, a multi-scale pyramid and layer-by-layer warping refinement are used; a Gaussian kernel is used for gradients and weights during the solution to improve robustness. The resulting pixel displacement (u, v) is converted into velocity and wind direction based on the ground sampling spacing Δs and time interval Δt:
$$\lVert \boldsymbol{v}\rVert=\sqrt{u^{2}+v^{2}}\,\frac{\Delta s}{\Delta t}$$
(7)
$$\theta=\operatorname{atan2}(-v,\,u)$$
(8)
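The per-window solve of Eq. (6) and the conversion of Eqs. (7)-(8) can be sketched as follows. This is a single-window, single-scale version for illustration only; the multi-scale pyramid, layer-by-layer warping, and Gaussian weighting used in the actual pipeline are omitted.

```python
import numpy as np

def lucas_kanade_window(frame0, frame1):
    """Least-squares flow (u, v) in px/frame for one image window, Eq. (6)."""
    fx = np.gradient(frame0, axis=1)   # spatial gradient, x direction
    fy = np.gradient(frame0, axis=0)   # spatial gradient, y direction
    ft = frame1 - frame0               # temporal gradient
    A = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(fx * fy), np.sum(fy * fy)]])
    b = -np.array([np.sum(fx * ft), np.sum(fy * ft)])
    u, v = np.linalg.solve(A, b)       # normal equations of the LK least squares
    return u, v

def flow_to_wind(u, v, ds_m, dt_s):
    """Pixel displacement -> speed (m/s) and direction, Eqs. (7)-(8)."""
    speed = np.hypot(u, v) * ds_m / dt_s
    theta = np.arctan2(-v, u)          # minus sign: image y axis points down
    return speed, theta
```

For AHI inputs, ds_m ≈ 2000 m and dt_s = 600 s per the text; the window must contain enough gradient structure for the 2×2 matrix to be well conditioned.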
In reality, brightness constancy holds only approximately in thermal infrared cloud images, and actual clouds vary in thickness, height, and dispersion. To address this, two perceptual-consistency corrections are introduced during synthesis:
(1) For optical-flow-advected images of high-speed clouds (moving faster than 25 km/h), a speed-dependent diffusion term is superimposed: the cloud area undergoes velocity-time coupled Gaussian diffusion or slight anisotropic smoothing to simulate morphological spreading.
(2) Time decay is applied to small-scale cloud patches (connected-component area less than 100 pixels) to simulate the evolution and dissipation of weak cloud bodies.
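A minimal sketch of the two corrections, assuming SciPy is available. The thresholds (25 km/h, 100 pixels) follow the text, while the diffusion coupling coefficient and the decay rate are assumed values chosen for illustration.

```python
import numpy as np
from scipy import ndimage

def apply_cloud_corrections(cloud, mask, speed_kmh, age_frames,
                            speed_thresh=25.0, min_area_px=100, decay=0.02):
    """Perceptual-consistency corrections for an advected cloud field.

    (1) Clouds faster than speed_thresh km/h get speed-dependent Gaussian
        diffusion to mimic morphological spreading.
    (2) Connected components smaller than min_area_px decay over time to
        mimic the dissipation of weak cloud bodies.
    """
    out = cloud.astype(float).copy()
    # (1) speed-coupled diffusion inside the cloud mask
    if speed_kmh > speed_thresh:
        sigma = 0.05 * (speed_kmh - speed_thresh)   # assumed coupling law
        out = np.where(mask, ndimage.gaussian_filter(out, sigma), out)
    # (2) temporal decay of small cloud patches
    labels, n = ndimage.label(mask)
    for lbl in range(1, n + 1):
        region = labels == lbl
        if region.sum() < min_area_px:
            out[region] *= np.exp(-decay * age_frames)
    return out
```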
Finally, the displacement-corrected, intensity-consistent moving clouds undergo grayscale adjustment and are overlaid onto nighttime scene images of the same region, generating a dynamic cloud background sequence with natural edges and continuous motion (see Figure 2).
Fig.2 Dynamic scene sequence generation flowchart.
1.3 Dynamic scene generation
The simulated target radiation is then fused with the dynamic background generated in Section 1.2. First, a binary mask is constructed from the target's geometric shape; the target's object-space radiance is converted to DN values and mapped onto the imaging grid. The target image is then degraded according to the imaging model and superimposed on the regional background image, yielding an infrared scene sequence containing the moving target (see Figure 3).
Thermal infrared imaging involves the coordinated integration of optomechanical, electrical, and detector chains, including signal readout circuitry, focal plane detectors, cryogenic optical systems, and the platform and attitude control system. These system interactions can introduce numerous factors that affect image quality, such as focal length offset, optical aberrations, and detector filtering characteristics. When simulating scenes detected by staring array detectors, focal plane non-uniformity and platform jitter must also be considered.
The infrared imaging model can be expressed as
$$I(x,y)=O(x,y)\ast H(x,y)+n(x,y)$$
(9)
Where O(x, y) is the real scene in object space; H(x, y) is the degradation function of the imaging system; ∗ denotes convolution; n(x, y) is the system noise; I(x, y) is the output degraded image.
Thermal infrared imagers may exhibit slight jitter during the integration period, so a given pixel may receive radiance from neighboring pixels. This mixing between pixels degrades image clarity. Under this jitter condition, the object-image relationship of the imaging system can be described as
$$g(x_0,y_0)=\frac{1}{T_{\mathrm{int}}}\int_{0}^{T_{\mathrm{int}}} f\big(x_0+x(t),\,y_0+y(t)\big)\,dt$$
(10)
Where x₀ and y₀ are the image-plane coordinates; g is the image function; f is the object function; T_int is the imaging integration time; x(t) and y(t) are the vibration functions in the x and y directions of the image plane, respectively. In this simulation, the inter-frame offset is set to 1 to 3 pixels.
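Eqs. (9)-(10) can be sketched as follows, with the jitter integral approximated by averaging a few randomly shifted sub-exposures. The sub-step count and shift distribution are assumptions for illustration; the noise standard deviation is taken as a fraction of the mean background, matching the paper's 0.007 noise factor.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def degrade(scene, psf_sigma=1.0, noise_factor=0.007, jitter_px=2, n_substeps=8):
    """Apply the imaging model of Eqs. (9)-(10):
    jitter-averaged scene, Gaussian PSF blur, additive temporal noise."""
    # Eq. (10): discretise integration-time jitter as averaged shifted copies
    acc = np.zeros_like(scene, dtype=float)
    for _ in range(n_substeps):
        dx, dy = rng.uniform(-jitter_px, jitter_px, size=2)
        acc += ndimage.shift(scene, (dy, dx), order=1, mode="nearest")
    blurred = acc / n_substeps
    # Eq. (9): convolve with the equivalent Gaussian PSF H
    blurred = ndimage.gaussian_filter(blurred, psf_sigma)
    # additive system noise n(x, y), std = noise_factor * mean background
    sigma_n = noise_factor * scene.mean()
    return blurred + rng.normal(0.0, sigma_n, scene.shape)
```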
The specific process of target and background fusion is as follows:
(1) Target mapping and quantization: based on the target radiation characteristics set for the current scene, the calibration coefficients are used to convert them into DN values, and the target mask is assigned on a 1 m resolution grid; the background image is upsampled to the same grid for spatial consistency.
(2) Imaging degradation: to reflect optical aberrations and systematic degradation, the target component is convolved with the equivalent PSF. This paper adopts a Gaussian PSF approximation with standard deviation σ = 30 pixels on the 1 m grid (one pixel at 30 m resolution), consistent with on-orbit image quality evaluation results for the thermal imager [25-26].
Fig.3Flowchart of moving target generation.
(3) Simulation of inter-band movement: considering the sampling time interval Δt_band = 7.3 ms between the thermal imager's spectral bands, and taking an airplane with a typical flight speed v of 220-280 m/s as an example, the object-space displacement between adjacent bands is approximately v·Δt_band ≈ 1.6-2.0 m. This sub-pixel displacement between spectral bands is an important cue for identifying moving targets and therefore cannot be ignored when simulating three-band dynamic scenes.
(4) Target area superposition: the superimposed target area is downsampled to 30 m resolution, replacing the corresponding area of the original scene.
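The four fusion steps above can be sketched as follows. This is a simplified version: the target patch here holds a DN offset relative to the background and is composited by addition rather than the replacement of step (4), and the inter-band delay of step (3) enters as a sub-pixel translation on the 1 m grid.

```python
import numpy as np
from scipy import ndimage

def fuse_target(background_30m, target_dn_1m, top_left_30m, band_shift_m=0.0):
    """Fuse a target DN-offset patch (1 m grid) into a 30 m background.

    Steps: (3) inter-band sub-pixel shift -> (2) Gaussian PSF blur with
    sigma = 30 px on the 1 m grid (one 30 m pixel) -> (4) 30x block-average
    downsampling and compositing at the given 30 m grid position.
    """
    # (3) inter-band sub-pixel motion: shift by v * dt_band metres on the 1 m grid
    patch = ndimage.shift(target_dn_1m, (0.0, band_shift_m), order=1)
    # (2) imaging degradation: equivalent Gaussian PSF on the 1 m grid
    patch = ndimage.gaussian_filter(patch, sigma=30.0)
    # (4) downsample to 30 m by 30x30 block averaging
    h, w = patch.shape
    patch30 = patch[: h - h % 30, : w - w % 30] \
        .reshape(h // 30, 30, w // 30, 30).mean(axis=(1, 3))
    out = background_30m.astype(float).copy()
    r, c = top_left_30m
    out[r : r + patch30.shape[0], c : c + patch30.shape[1]] += patch30
    return out
```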
The result generated by the above steps is a multi-frame infrared sequence, which not only reflects the continuous evolution of the cloud background, but also presents the motion characteristics of targets such as aircraft and ships in the real background; it maintains degradation and noise characteristics consistent with the imaging mechanism of the system in both spatial and temporal dimensions.
2 Dataset description
The dataset constructed in this paper consists of four typical infrared dynamic scenes, each covering an area of 100 km × 100 km. Using TIS images as backgrounds, simulated moving targets were superimposed to construct three-band thermal infrared image sequences. A single scene consists of 600 frames (lasting 120 seconds) at a 5 Hz frame rate, enabling characterization of a target's continuous motion over time. The four scenes contain a total of 55 aircraft and ship targets and cover six typical surface types (clouds, oceans, cities, deserts, mountains, and grasslands), demonstrating both spatial diversity and background complexity. Figure 4 shows the target positions and motion trajectories of the four scenes.
In terms of target types, the aircraft set includes five common civil aircraft types: the B737, B747, B777, A320, and A350. The B737 and A320 are medium-sized passenger aircraft with wingspans of approximately 30 meters, while the B747, B777, and A350 are large wide-body aircraft with wingspans of approximately 60 meters. Ship targets are set at three length scales of 100, 200, and 300 meters, covering the typical size range from medium-sized merchant ships to large transport vessels.
In terms of thermal infrared imaging characteristics, all 37 aircraft samples exhibit negative contrast, the eight ship targets in Scene 2 exhibit positive contrast with the sea surface, and the 15 ship targets in Scenes 1 and 4 exhibit negative contrast. Target-background contrast directly affects the radiometric resolvability of infrared systems and the difficulty of detection and tracking algorithms, making it a key variable for assessing detectability.
Fig.4 Target positions and motion trajectories in the dataset scenes.
In the temporal dimension, the signal-to-noise ratio (SNR) can be used to evaluate the performance of the detection system. For effective target detection, the system needs a sufficiently high SNR to ensure that the target signal can be reliably distinguished from the background. In an image, the SNR can be expressed as
$$\mathrm{SNR}=\frac{\mathrm{DN}_{\mathrm{target\_max}}-\overline{\mathrm{DN}}_{\mathrm{back}}}{\sigma_{\mathrm{noise}}}$$
(11)
Where DN_target_max represents the highest target DN value; the overlined DN_back represents the average DN value of the target's neighborhood background; σ_noise represents the temporal noise of the detection chain, which is related to the detector's readout noise, dark current, and background response. Based on current process technology and TIS performance analysis, the standard deviation of the temporal noise is approximately 0.4% to 0.9% of the background intensity; in this paper, the noise factor is set to 0.007. In the spatial dimension, the target appears as a local extremum, and the signal-to-clutter ratio (SCR) is usually used to reflect target detectability. The SCR is defined as the ratio of the target signal to the neighboring clutter, which can be expressed as
$$\mathrm{SCR}=\frac{\mathrm{DN}_{\mathrm{target\_max}}-\overline{\mathrm{DN}}_{\mathrm{back}}}{\sigma_{\mathrm{clutter}}}$$
(12)
Where σ_clutter represents the clutter level:
$$\sigma_{\mathrm{clutter}}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{2}}$$
(13)
Where σ_i is the root mean square of the response of the i-th neighborhood background pixel; N is the number of pixels in the neighborhood. The side length of the square background neighborhood is approximately twice the minimum scale of the target.
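Eqs. (11)-(13) can be computed directly from an image chip and a target mask. A small sketch, assuming the neighborhood background is the rest of the chip, and taking the signal magnitude so that negative-contrast targets also yield positive values:

```python
import numpy as np

def snr_scr(dn, target_mask, noise_factor=0.007):
    """Compute SNR (Eq. 11) and SCR (Eq. 12) for one target chip.

    dn          : 2-D array of DN values (target plus neighborhood background)
    target_mask : boolean array marking target pixels
    The temporal-noise std is noise_factor * mean background DN (0.007 here,
    per the paper); abs() handles negative-contrast targets (an assumption).
    """
    back = dn[~target_mask]
    signal = abs(dn[target_mask].max() - back.mean())
    sigma_noise = noise_factor * back.mean()
    # Eq. (13): RMS of the neighborhood clutter about the background mean
    sigma_clutter = np.sqrt(np.mean((back - back.mean()) ** 2))
    return signal / sigma_noise, signal / sigma_clutter
```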
Figure 5 shows three-band pseudo-color images of some targets in the dataset, along with inter-frame difference images in the 10.3-11.3 μm band (5 adjacent frames for aircraft targets and 30 adjacent frames for ship targets); the corresponding SNR and SCR values were calculated.
Cloud background is an important interference factor affecting detectability. Targets located under clouds are obscured, and their radiation signal has difficulty reaching the detector; aircraft above clouds show extremely low target-background contrast because their low fuselage temperatures are close to the cloud-top temperature, which often leads to missed detections. In cloudy sea areas, cloud obstruction makes the visible trajectory of ships in the video intermittent, significantly increasing the difficulty of temporal association and target tracking; at the same time, the temporal movement of the cloud field introduces background changes. To reflect the real characteristics of cloudy coastal cities, the dataset selects two cloud-covered scenes and simulates cloud motion to evaluate the challenges that cloud interference poses to multi-frame target detection and tracking.
Complex background features are also a significant source of false alarms. Small, cool buildings and green areas in cities at night can easily be confused with aircraft due to their spatial scale and thermal radiation characteristics. Isolated patches of cool vegetation or terrain shadows in desert environments can also exhibit radiation characteristics similar to those of aircraft. High-altitude mountainous areas have relatively low overall background temperatures and low target-background contrast, making detection more difficult. The dataset's scene selection considers these high-false-alarm, difficult-to-detect areas.
The information for the four scenes selected in the dataset is shown in Table 1. Scene 1 is the Shanghai area of China, imaged at night; it covers Hongqiao Airport, with dense flights whose take-off and landing processes can be observed (the background mainly comprises city, clouds, and offshore water). As shown in Figure 6 (b), during the take-off of target S1-Aircraft09, the fuselage temperature decreases as flight altitude increases, and the SNR and SCR increase. Scene 2 is the area east of Taiwan Province, China, imaged during the day; the background mainly includes ocean, grassland, and clouds, and the interaction between ships and cloud interference is considered, as shown in Figure 6 (c). Scene 3 is located near the Qilian Mountains in western China, imaged during the day; it covers complex landforms such as forests, grasslands, and snow-capped mountains, with significant temperature-field gradients, as shown in Figure 6 (d). Scene 4 is a night scene of the Persian Gulf north of Qatar; it lies on an important air and sea shipping channel and can simultaneously characterize air and sea target activity (the background comprises ocean, desert, and city). As shown in Figure 6 (e), the SCR increases significantly when an aircraft flies from land out over the sea.
Fig.5 Three-band pseudo-color images and single-band difference images of some samples in the dataset (SX denotes the scene number; AXX and SXX denote aircraft and ships; for example, S4-S04 denotes ship number 04 in scene 4)
Fig.6 SNR and SCR of some samples in the dataset.
Table 1 Scene information in the dataset
This dataset strikes a balance between temporal continuity, target heterogeneity, and background complexity, providing a reproducible experimental benchmark for tasks such as infrared small target detection, video object tracking, cloud interference robustness assessment, and detectability analysis.
3 Conclusion
This paper constructs a multi-frame, three-spectral thermal infrared dataset covering seven typical surface backgrounds and eight types of sea and air targets (link: https://github.com/XiaoxuanZhou/FSDIR_Dataset) . Using the core process of "high-speed cloud motion estimation − high-resolution background mapping − historical prior-driven target simulation", this dataset generates dynamic scenes containing clouds, ground backgrounds, aircraft, and ships. While simulating target morphology, it preserves multi-spectral radiometric characteristics and sub-pixel motion features, achieving consistency with real-world scenarios in both spatiotemporal scales and imaging mechanisms.
This dataset can serve as basic data support for the development of algorithms such as detection, tracking, and multi-spectral fusion, as well as for the evaluation of system limit detection. It supports the quantification of detectability and comparison of algorithm robustness for targets such as different types of aircraft under different SNR/SCR and complex background conditions. Consequently, it provides reliable, reproducible methods and data for continuous algorithm iteration and engineering implementation.
References
[1] Wang D, Li X, Hao M. Aircraft Target Detection in Remote Sensing Images Based on Improved Faster R-CNN[C]. Dali: 2023 IEEE 5th International Conference on Civil Aviation Safety and Information Technology (ICCASIT), 2023.
[2] Yuan X, Zheng Z, Li Y, et al. Strip R-CNN: Large Strip Convolution for Remote Sensing Object Detection[J]. arXiv:2501.03775, 2025.
[3] Zhang Y, Ye M, Zhu G, et al. FFCA-YOLO for Small Object Detection in Remote Sensing Images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 1-15.
[4] Xu Q, Li Y, Shi Z. LMO-YOLO: A Ship Detection Model for Low-Resolution Optical Satellite Imagery[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 4117-4131.
[5] Li Z, Wang Y, Feng H, et al. Local to Global: A Sparse Transformer-Based Small Object Detector for Remote Sensing Images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63: 1-16.
[6] Wu H, Huang X, He C, et al. Infrared Small Target Detection with Swin Transformer-Based Multiscale Atrous Spatial Pyramid Pooling Network[J]. IEEE Transactions on Instrumentation and Measurement, 2025, 74: 1-14.
[7] Chen T, Ye Z, Tan Z, et al. MiM-ISTD: Mamba-in-Mamba for Efficient Infrared Small-Target Detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 1-13.
[8] Zhang Q, Wang W, Liu Y, et al. Selective Structured State Space for Multispectral-fused Small Target Detection[J]. arXiv:2505.14043, 2025.
[9] Bui L, Phung S L, Di Y, et al. UOW-Vessel: A Benchmark Dataset of High-Resolution Optical Satellite Images for Vessel Detection and Segmentation[C]. Waikoloa: 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024.
[10] Song S W, Zhang R, Hu M, et al. Fine-Grained Ship Recognition Based on Visible and Near-Infrared Multimodal Remote Sensing Images: Dataset, Methodology and Evaluation[J]. Computers, Materials and Continua, 2024, 79(3): 5243-5271.
[11] Yao L, Zhang X, Lyu Y, et al. FGSC-23: A large-scale dataset of high-resolution optical remote sensing image for deep learning-based fine-grained ship recognition[J]. Journal of Image and Graphics, 2024, 26(10): 2337-2345.
[12] Osswald M, Niederloehner L, Koejer S, et al. FineAir: Finest-grained Airplanes in High-resolution Satellite Images[C]. Tucson: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, 2025.
[13] Bakırman T, Sertel E. A benchmark dataset for deep learning-based airplane detection: HRPlanes[J]. International Journal of Engineering and Geosciences, 2023, 8(3): 212-223.
[14] Wu Z Z, Wan S H, Wang X F, et al. A benchmark data set for aircraft type recognition from remote sensing images[J]. Applied Soft Computing, 2020, 89: 106132.
[15] Segundo M P, Pinto A, Minetto R, et al. Measuring Economic Activity from Space: A Case Study Using Flying Airplanes and COVID-19[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 7213-7224.
[16] Li L, Zhou X, Hu Z, et al. On-orbit monitoring flying aircraft day and night based on SDGSAT-1 thermal infrared dataset[J]. Remote Sensing of Environment, 2023, 298: 113840.
[17] Li L, Zhou X, Zhang W, et al. Thermal sentinel: Low-earth orbit infrared intelligent system for flying civil aircraft safety[J]. Remote Sensing of Environment, 2025, 328: 114826.
[18] Li L, Yu J, Chen F. TISD: A Three Bands Thermal Infrared Dataset for All Day Ship Detection in Spaceborne Imagery[J]. Remote Sensing, 2022, 14(21): 5297.
[19] Westlake S T, Volonakis T N, Jackman J, et al. Deep learning for automatic target recognition with real and synthetic infrared maritime imagery[C]. SPIE, 2020, 11543: 41-53.
[20] Jakubowicz J, Lefebvre S, Maire F, et al. Detecting Aircraft with a Low-Resolution Infrared Sensor[J]. IEEE Transactions on Image Processing, 2012, 21(6): 3034-3041.
[21] Li Z X, Xu Q Y, An W, et al. A lightweight dark object detection network for infrared images[J]. Journal of Infrared and Millimeter Waves, 2025, 44(2): 285-296.
[22] Zhou X, Li L, Yu J, et al. Multimodal aircraft flight altitude inversion from SDGSAT-1 thermal infrared data[J]. Remote Sensing of Environment, 2024, 308: 114178.
[23] Zhang L, Qiao K, Huang S S. Spectrum selection and performance analysis for ship detection[J]. Journal of Infrared and Millimeter Waves, 2024, 43(2): 235-241.
[24] OpenCV. cv::DualTVL1OpticalFlow Class Reference[EB/OL]. https://docs.opencv.org/3.4/dc/d47/classcv_1_1DualTVL1OpticalFlow.html, 2025.
[25] Zhou X, Zhang J, Li M, et al. Thermal infrared spectrometer on-orbit defocus assessment based on blind image blur kernel estimation[J]. Infrared Physics & Technology, 2023, 130: 104538.
[26] Qi L, Li L, Ni X, et al. On-Orbit Spatial Quality Evaluation of SDGSAT-1 Thermal Infrared Spectrometer[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 1-5.