HUANG Linglin, LI Qiang, LU Jinzheng, HE Xianzhen, PENG Bo. Infrared and Visible Image Fusion Based on Multi-scale and Attention Model[J]. Infrared Technology, 2023, 45(2): 143-149.

Infrared and Visible Image Fusion Based on Multi-scale and Attention Model

More Information
  • Received Date: February 24, 2021
  • Revised Date: April 11, 2021
  • Abstract: To address the artifacts and blurred small-target outlines that arise when infrared and visible images are fused, a fusion algorithm combining multi-scale features with an attention model is proposed. Feature maps of the source images are extracted at different scales through five successive down-sampling steps; infrared and visible feature maps of the same scale are then fed into an attention-based fusion layer to obtain enhanced fused feature maps. Finally, the smallest-scale fused feature map is up-sampled five times, with each up-sampled result added to the fused feature map of the matching scale until the original image scale is reached, realizing multi-scale fusion of the feature maps. Experiments compare the entropy, standard deviation, mutual information, edge retention, wavelet-feature mutual information, visual information fidelity, and fusion efficiency of fused images under different fusion frameworks. The proposed method outperforms the comparison algorithms on most metrics, and the fused images show distinct target details and clear outlines.
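The coarse-to-fine pipeline described in the abstract (five down-samplings, same-scale attention fusion, then repeated up-sample-and-add) can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names are hypothetical, and a simple softmax-style spatial weighting stands in for the paper's attention model.

```python
import numpy as np

def downsample(x):
    """Average-pool a 2-D feature map by a factor of 2."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour up-sampling by a factor of 2."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def attention_fuse(ir, vis):
    """Fuse two same-scale maps with exponential spatial-attention weights
    (a stand-in for the paper's attention model)."""
    w_ir, w_vis = np.exp(np.abs(ir)), np.exp(np.abs(vis))
    return (w_ir * ir + w_vis * vis) / (w_ir + w_vis)

def multiscale_fuse(ir, vis, levels=5):
    # Build feature pyramids by repeated down-sampling of the source images.
    irs, viss = [ir], [vis]
    for _ in range(levels):
        irs.append(downsample(irs[-1]))
        viss.append(downsample(viss[-1]))
    # Fuse the infrared and visible maps at each scale.
    fused = [attention_fuse(a, b) for a, b in zip(irs, viss)]
    # Coarse-to-fine: up-sample and add to the next finer fused map
    # until the result matches the source-image scale.
    out = fused[-1]
    for finer in reversed(fused[:-1]):
        out = upsample(out) + finer
    return out
```

With power-of-two image sizes (e.g. 64 x 64), five down-samplings yield a 2 x 2 coarsest map, and the up-sample-and-add loop restores the original resolution.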
  • [1]
    赵立昌, 张宝辉, 吴杰, 等. 基于灰度能量差异性的红外与可见光图像融合[J]. 红外技术, 2020, 42(8): 775-782. https://www.cnki.com.cn/Article/CJFDTOTAL-HWJS202008012.htm

    ZHAO Lichang, ZHANG Baohui, WU Jie, et al. Fusion of infrared and visible images based on gray energy difference[J]. Infrared Technology, 2020, 42(8): 775-782. https://www.cnki.com.cn/Article/CJFDTOTAL-HWJS202008012.htm
    [2]
    白玉, 侯志强, 刘晓义, 等. 基于可见光图像和红外图像决策级融合的目标检测算法[J]. 空军工程大学学报(自然科学版), 2020, 21(6): 53-59, 100. DOI: 10.3969/j.issn.1009-3516.2020.06.009

    BAI Yu, HOU Zhiqiang, LIU Xiaoyi, et al. An object detection algorithm based on decision-level fusion of visible light image and infrared image[J]. Journal of Air Force Engineering University(Natural Science Editon), 2020, 21(6): 53-59, 100. DOI: 10.3969/j.issn.1009-3516.2020.06.009
    [3]
    董安勇, 杜庆治, 苏斌, 等. 基于卷积神经网络的红外与可见光图像融合[J]. 红外技术, 2020, 42(7): 660-669. http://hwjs.nvir.cn/article/id/hwjs202007009

    DONG Anyong, DU Qingzhi, SU Bin, et al. Infrared and visible image fusion based on convolutional neural network[J]. Infrared Technology, 2020, 42(7): 660-669. http://hwjs.nvir.cn/article/id/hwjs202007009
    [4]
    陈卓, 方明, 柴旭, 等. 红外与可见光图像融合的U-GAN模型[J]. 西北工业大学学报, 2020, 38(4): 904-912. DOI: 10.3969/j.issn.1000-2758.2020.04.027

    CHEN Zhuo, FANG Ming, CHAI Xu, et al. Infrared and visible image fusion of U-GAN model[J]. Journal of Northwestern Polytechnical University, 2020, 38(4): 904-912. DOI: 10.3969/j.issn.1000-2758.2020.04.027
    [5]
    陈潮起, 孟祥超, 邵枫, 等. 一种基于多尺度低秩分解的红外与可见光图像融合方法[J]. 光学学报, 2020, 40(11): 72-80. https://www.cnki.com.cn/Article/CJFDTOTAL-GXXB202011008.htm

    CHEN Chaoqi, MENG Xiangchao, SHAO Feng, et al. Infrared and visible image fusion method based on multiscale low-rank decomposition [J]. Acta Optica Sinica, 2020, 40(11): 72-80. https://www.cnki.com.cn/Article/CJFDTOTAL-GXXB202011008.htm
    [6]
    林子慧. 基于多尺度变换的红外与可见光图像融合技术研究[D]. 成都: 中国科学院大学(中国科学院光电技术研究所), 2019.

    LIN Zihui. Research on Infrared and Visible Image Fusion Based on Multi-scale Trandform[D]. Chengdu: The Chinese Academy of Sciences(The Institute of Optics and Electronics), 2019.
    [7]
    马旗, 朱斌, 张宏伟. 基于VGG网络的双波段图像融合方法[J]. 激光与红外, 2019, 49(11): 1374-1380. DOI: 10.3969/j.issn.1001-5078.2019.11.018

    MA Qi, ZHU Bin, ZHANG Hongwei. Dual-band image fusion method based on VGGNet[J]. Laser & Infrared, 2019, 49(11): 1374-1380. DOI: 10.3969/j.issn.1001-5078.2019.11.018
    [8]
    LI H, WU X, Durrani T S. Infrared and visible image fusion with ResNet and zero-phase component analysis[J]. Infrared Physics & Technology, 2019, 102: 103039.
    [9]
    LIN T, Dollár P, Girshick R, et al. Feature pyramid networks for object detection[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 936-944.
    [10]
    Woo S, Park J, Lee J, et al. CBAM: Convolution-al Block Attention Module[C]//ECCV, 2018: 3-19.
    [11]
    LI H, WU X J, Durrani T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models [J]. IEEE Transactions on Instrumentation and Measurement, 2020, 12(69): 9645-9656.
    [12]
    杨艳春, 李娇, 王阳萍. 图像融合质量评价方法研究综述[J]. 计算机科学与探索, 2018, 12(7): 1021-1035. https://www.cnki.com.cn/Article/CJFDTOTAL-KXTS201807002.htm

    YANG Yanchun, LI Jiao, WANG Yangping. Review of image fusion quality evaluation methods[J]. Journal of Frontiers of Computer Science and Technology, 2018, 12(7): 1021-1035. https://www.cnki.com.cn/Article/CJFDTOTAL-KXTS201807002.htm
    [13]
    LIN T Y, Maire M, Belongie S, et al. Microsoft coco: common objects in context[C]//ECCV, 2014: 3-5.
    [14]
    Toet A. TNO Image Fusion Dataset. figshare. Dataset[DB/OL]. https://doi.org/10.6084/m9.figshare.1008029.v2, 2014.
    [15]
    LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Trans. Image Process, 2019, 28(5): 2614-2623.
    [16]
    Prabhakar K R, Srikar V S, Babu R V. DeepFuse: a deep unsuper-vised approach for exposure fusion with extreme exposure image pairs[C]//2017 IEEE International Conference on Computer Vision (ICCV), 2017: 4724-4732.
    [17]
    MA J, ZHOU Z, WANG B. et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17.
