Abstract:
Because existing fusion rules fail to preserve image saliency, a self-attention-guided infrared and visible image fusion method is proposed. First, the feature maps and self-attention maps of the source images are learned by a self-attention mechanism in the feature learning layer. Next, the self-attention map, which captures long-range dependencies in the image, is used to design a weighted-average fusion strategy. Finally, the fused feature maps are decoded to reconstruct the fused image; the feature encoding, the self-attention mechanism, the fusion rule, and the fused-feature decoding are all learned within a generative adversarial network. Experiments on the real-world TNO dataset show that the learned self-attention units capture salient regions and benefit fusion-rule design, and that the proposed algorithm outperforms state-of-the-art infrared and visible image fusion algorithms in both objective and subjective evaluation, retaining the detail information of visible images and the thermal target information of infrared images.
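The following is a minimal sketch, not the authors' implementation, of the attention-weighted fusion idea summarized above: a SAGAN-style self-attention block that exposes a spatial saliency map, and a weighted-average fusion rule driven by the saliency maps of the two source feature maps. The layer sizes, the softmax-based weighting, and all names (SelfAttention, attention_weighted_fusion) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """SAGAN-style self-attention that also exposes a spatial saliency map.

    This is an assumed design for illustration; the paper's exact attention
    unit may differ.
    """

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw): long-range dependencies
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # Total attention received by each location, used here as a saliency map.
        saliency = attn.sum(dim=1).view(b, 1, h, w)
        return self.gamma * out + x, saliency


def attention_weighted_fusion(feat_ir, feat_vis, sal_ir, sal_vis):
    """Weighted-average fusion of two feature maps, guided by saliency maps."""
    weights = torch.softmax(torch.cat([sal_ir, sal_vis], dim=1), dim=1)
    w_ir, w_vis = weights[:, :1], weights[:, 1:]
    return w_ir * feat_ir + w_vis * feat_vis
```

In this sketch the fused feature map would then be passed to a decoder to reconstruct the fused image, with the encoder, attention unit, fusion rule, and decoder trained jointly under an adversarial loss, as the abstract describes.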