Infrared and Visible Image Fusion Based on Multi-scale and Attention Model

  • Abstract: To address the artifacts and blurred small-target outlines that often appear in fused infrared and visible images, an infrared and visible image fusion algorithm combining multi-scale features with an attention model is proposed. Feature maps of the source images are extracted at different scales through five down-sampling steps, and the infrared and visible feature maps at each scale are fed into an attention-based fusion layer to obtain enhanced fused feature maps. The smallest-scale fused feature map is then up-sampled five times, with the fused feature map of the same scale added after each up-sampling step, until the scale of the source images is recovered, realizing multi-scale fusion of the feature maps. Experiments compare the entropy, standard deviation, mutual information, edge preservation, wavelet-feature mutual information, visual information fidelity, and fusion efficiency of the fused images produced by different fusion frameworks; the proposed method outperforms the comparison algorithms on most metrics, and the fused images show distinct target details and clear outlines.
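To make the processing pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of such a multi-scale, attention-weighted fusion network; it is not the authors' implementation. The module names (ConvBlock, ChannelAttention, MSAttentionFusion), the channel widths, the squeeze-and-excitation style channel attention, and the bilinear up-sampling are all illustrative assumptions; only the overall flow (five down-samplings, per-scale attention fusion, five up-samplings with same-scale addition) follows the abstract.

```python
# A minimal sketch of the fusion flow described above, assuming PyTorch.
# ConvBlock, ChannelAttention and MSAttentionFusion are hypothetical names;
# channel widths and the SE-style attention are illustrative choices, not
# the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """3x3 convolution + ReLU used at every scale."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumption;
    the paper's attention model may differ in detail)."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # re-weight channels of the feature map


class MSAttentionFusion(nn.Module):
    """Encoder (5 down-samplings) -> per-scale attention fusion ->
    decoder (5 up-samplings with same-scale addition)."""
    def __init__(self, base_ch=16, levels=5):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(levels + 1)]  # 16, 32, ..., 512
        self.levels = levels
        self.pool = nn.MaxPool2d(2)
        self.enc = nn.ModuleList(
            [ConvBlock(1 if i == 0 else chs[i - 1], chs[i]) for i in range(levels + 1)])
        self.att = nn.ModuleList([ChannelAttention(c) for c in chs])
        self.dec = nn.ModuleList([ConvBlock(chs[i + 1], chs[i]) for i in range(levels)])
        self.out = nn.Conv2d(chs[0], 1, kernel_size=1)

    def encode(self, x):
        # Extract one feature map per scale (finest first, 5 down-samplings).
        feats = []
        for i, block in enumerate(self.enc):
            x = block(x if i == 0 else self.pool(x))
            feats.append(x)
        return feats

    def forward(self, ir, vis):
        f_ir, f_vis = self.encode(ir), self.encode(vis)
        # Fuse same-scale infrared/visible feature maps with attention weighting.
        fused = [self.att[i](f_ir[i]) + self.att[i](f_vis[i])
                 for i in range(self.levels + 1)]
        # Up-sample from the coarsest scale, adding the fused map of each scale,
        # until the spatial size of the source images is recovered.
        x = fused[-1]
        for i in reversed(range(self.levels)):
            x = F.interpolate(x, size=fused[i].shape[-2:],
                              mode="bilinear", align_corners=False)
            x = self.dec[i](x) + fused[i]
        return torch.sigmoid(self.out(x))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 256, 256)   # single-channel infrared image
    vis = torch.rand(1, 1, 256, 256)  # single-channel visible image
    print(MSAttentionFusion()(ir, vis).shape)  # torch.Size([1, 1, 256, 256])
```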

     
