Abstract: Infrared and visible images have markedly different characteristics, and no ideal fused image exists to supervise a neural network in learning the mapping from the source images to the fused image, which limits the application of deep learning to image fusion. To address this problem, a generative adversarial network framework based on an attention mechanism and an edge loss is proposed for infrared and visible image fusion. Drawing on the ideas of attention and adversarial training, fusion is formulated as an adversarial game between the source images and the fused image; combining channel and spatial attention allows the network to learn the nonlinear relationship between channel-domain and spatial-domain features, enhancing the representation of salient target features. In addition, an edge-based loss function is proposed that converts the pixel-level mapping between the source images and the fused image into a mapping between their edge maps. Experimental results on multiple datasets demonstrate that the proposed method effectively fuses infrared targets with visible texture information, sharpens image edges, and significantly improves image clarity and contrast.
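The edge-based loss described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes Sobel operators extract the edge maps and that the fused image's edges are compared (via an L1 distance) against the element-wise maximum of the two source edge maps; the function names `sobel_edges` and `edge_loss` are hypothetical.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D float image via Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate border pixels
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_loss(fused, ir, vis):
    """L1 distance between the fused edge map and a target edge map.

    The target here is the element-wise maximum of the infrared and
    visible edge maps -- an assumed choice for illustration only.
    """
    target = np.maximum(sobel_edges(ir), sobel_edges(vis))
    return np.abs(sobel_edges(fused) - target).mean()
```

In a training loop this scalar would be added, with some weight, to the adversarial loss of the generator, encouraging the fused image to preserve the strongest edges present in either source.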