
Infrared and Visible Image Fusion Combining Multi-scale and Convolutional Attention


     

    Abstract: To address insufficient single-scale feature extraction and the loss of infrared targets and visible texture details when fusing infrared and visible images, an infrared and visible image fusion algorithm combining multi-scale features and convolutional attention is proposed. First, an encoder network combining a multi-scale feature extraction module and a deformable convolutional attention module is designed to extract important feature information from the infrared and visible images over multiple receptive fields. Then, a fusion strategy based on a spatial and channel dual-attention mechanism further fuses the typical features of the infrared and visible images. Finally, a decoder network composed of three convolutional layers reconstructs the fused image. In addition, a hybrid loss function based on mean squared error, multi-scale structural similarity, and color is designed to constrain network training, further improving the similarity between the fused image and the source images. The proposed algorithm is compared with seven image fusion algorithms on a public dataset; in both subjective and objective evaluations, it exhibits better edge preservation, better retention of source image information, and higher fused image quality than the comparison algorithms.
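The hybrid loss combining mean squared error, structural similarity, and a color term can be sketched as follows. This is a minimal illustration only: the weights `w_mse`, `w_ssim`, and `w_color`, the single-scale global-statistics SSIM (the paper uses multi-scale SSIM), and the channel-mean color term are all assumptions standing in for the paper's exact formulation.

```python
import numpy as np

def mse_loss(fused, ref):
    """Mean squared error between the fused image and a reference image."""
    return float(np.mean((fused - ref) ** 2))

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-scale SSIM from global image statistics.
    (Simplification: the paper's loss uses multi-scale SSIM.)"""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def color_loss(fused_rgb, visible_rgb):
    """Stand-in color term: MSE between per-channel means, as a crude
    proxy for keeping the fused image's colors close to the visible image."""
    return float(np.mean((fused_rgb.mean(axis=(0, 1))
                          - visible_rgb.mean(axis=(0, 1))) ** 2))

def hybrid_loss(fused, ir, vis, fused_rgb, vis_rgb,
                w_mse=1.0, w_ssim=1.0, w_color=0.1):
    """Weighted sum of MSE, (1 - SSIM), and color terms; the MSE and SSIM
    terms average over both source images (equal weighting is an assumption)."""
    l_mse = 0.5 * (mse_loss(fused, ir) + mse_loss(fused, vis))
    l_ssim = 0.5 * ((1 - ssim_global(fused, ir)) + (1 - ssim_global(fused, vis)))
    l_color = color_loss(fused_rgb, vis_rgb)
    return w_mse * l_mse + w_ssim * l_ssim + w_color * l_color
```

Each term pulls the fused image toward a different property of the sources: MSE toward pixel intensities, SSIM toward local structure, and the color term toward the visible image's chrominance.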

     
