Citation: WANG Tianyuan, LUO Xiaoqing, ZHANG Zhancheng. Infrared and Visible Image Fusion Based on Self-attention Learning[J]. Infrared Technology, 2023, 45(2): 171-177.

Infrared and Visible Image Fusion Based on Self-attention Learning

More Information
  • Received Date: March 05, 2021
  • Revised Date: August 21, 2021

Abstract

Because existing fusion rules fail to preserve image saliency, a self-attention-guided infrared and visible image fusion method is proposed. First, feature maps and self-attention maps of the source images are learned by a self-attention mechanism in the feature-learning layer. Next, the self-attention maps, which capture the long-range dependencies of the image, are used to design a weighted-average fusion strategy. Finally, the fused feature maps are decoded to reconstruct the fused image; the feature encoding, self-attention mechanism, fusion rule, and fused-feature decoding are all learned jointly within a generative adversarial network. Experiments on the real-world TNO dataset show that the learned self-attention units capture salient regions and benefit fusion-rule design. The proposed algorithm outperforms state-of-the-art infrared and visible image fusion algorithms in both objective and subjective evaluation, retaining the detail information of the visible image and the target information of the infrared image.
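To make the pipeline concrete, below is a minimal PyTorch sketch of the two components the abstract describes: a SAGAN-style self-attention block (Zhang et al.) that returns an attended feature map together with a per-pixel attention (saliency) map, and a fusion rule that averages the infrared and visible feature maps weighted per pixel by those maps. This is an illustration under assumptions, not the authors' released code; the module names, the single-channel saliency reduction, and the channel sizes are all hypothetical.

    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Self-attention over spatial positions (SAGAN-style)."""
        def __init__(self, channels: int):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # B x HW x C/8
            k = self.key(x).flatten(2)                    # B x C/8 x HW
            attn = torch.softmax(q @ k, dim=-1)           # B x HW x HW: long-range dependencies
            v = self.value(x).flatten(2)                  # B x C x HW
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            out = self.gamma * out + x                    # attended feature map
            # Reduce channels to one spatial saliency map in (0, 1) -- an
            # assumed, illustrative way to expose the attention as a weight.
            saliency = torch.sigmoid(out.mean(dim=1, keepdim=True))
            return out, saliency

    def attention_weighted_fusion(feat_ir, feat_vis, sal_ir, sal_vis, eps=1e-8):
        """Per-pixel weighted average of two feature maps using their saliency maps."""
        w_ir = sal_ir / (sal_ir + sal_vis + eps)
        w_vis = sal_vis / (sal_ir + sal_vis + eps)
        return w_ir * feat_ir + w_vis * feat_vis

In an encoder-decoder setting of the kind the abstract outlines, the same block would be applied to the encoder features of both source images, and the decoder would reconstruct the fused image from the output of attention_weighted_fusion.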
Cited by

Periodical citations (4)

1. 王敷轩, 庞珊. Infrared and visible image fusion based on multi-granularity cross-modal feature enhancement. Journal of Dongguan University of Technology, 2024(03): 32-37.
2. 李立, 易诗, 刘茜, 程兴豪, 王铖. Infrared image deblurring based on a dense residual generative adversarial network. Infrared Technology, 2024(06): 663-671.
3. 杨艳春, 雷慧云, 杨万轩. Infrared and visible image fusion based on a fast joint bilateral filter and an improved PCNN. Infrared Technology, 2024(08): 892-901.
4. 陈广秋, 温奇璋, 尹文卿, 段锦, 黄丹丹. Attention residual dense fusion network for infrared and visible image fusion. Journal of Electronic Measurement and Instrumentation, 2023(08): 182-193.

Other citation types (4)
