Volume 44, Issue 3, Mar. 2022
QU Haicheng, WANG Yuping, GAO Jiankang, ZHAO Siqi. Mode Adaptive Infrared and Visible Image Fusion[J]. Infrared Technology, 2022, 44(3): 268-276.

Mode Adaptive Infrared and Visible Image Fusion

  • Received Date: 2021-07-18
  • Revised Date: 2021-09-23
  • Publish Date: 2022-03-20
  • To address the low contrast and high noise of fused images in low-illumination and smoky environments, a mode-adaptive infrared and visible image fusion method (MAFusion) is proposed. First, the infrared and visible images are fed into an adaptive weighting module in the generator, where the differences between them are learned through two-stream interactive learning, yielding the contribution of each modality to the fusion task under different environments. Then, according to the characteristics of each modality's features, the corresponding weights are obtained independently, and the fused features are produced by weighted fusion. Finally, to improve the model's learning efficiency and supplement the multi-scale features of the fused image, a module combining residual blocks and skip connections is added to the fusion process to improve network performance. Fusion quality was evaluated on the TNO and KAIST datasets. The results show that the proposed method achieves good visual quality in subjective evaluation, and its information entropy, mutual information, and noise-based evaluation indexes outperform those of the comparison methods.
  • [1]
    段辉军, 王志刚, 王彦. 基于改进YOLO网络的双通道显著性目标识别算法[J]. 激光与红外, 2020, 50(11): 1370-1378. doi:  10.3969/j.issn.1001-5078.2020.11.014

    DUAN H J, WANG Z, WANG Y. Two-channel saliency object recognition algorithm based on improved YOLO network[J]. Laser & Infrared, 2020, 50(11): 1370-1378. doi:  10.3969/j.issn.1001-5078.2020.11.014
    [2]
    李舒涵, 宏科, 武治宇. 基于红外与可见光图像融合的交通标志检测[J]. 现代电子技术, 2020, 43(3): 45-49. https://www.cnki.com.cn/Article/CJFDTOTAL-XDDJ202003012.htm

    LI S H, XU H K, WU Z Y. Traffic sign detection based on infrared and visible image fusion[J]. Modern Electronics Technique, 2020, 43(3): 45-49. https://www.cnki.com.cn/Article/CJFDTOTAL-XDDJ202003012.htm
    [3]
    Reinhard E, Adhikhmin M, Gooch B, et al. Color transfer between images[J]. IEEE Comput. Graph. Appl. , 2001, 21(5): 34-41. http://www.cs.northwestern.edu/~bgooch/PDFs/ColorTransfer.pdf
    [4]
    Kumar P, Mittal A, Kumar P. Fusion of thermal infrared and visible spectrum video for robust surveillance[C]// Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, 2006: 528-539.
    [5]
    常新亚, 丁一帆, 郭梦瑶. 应用整体结构信息分层匹配的红外与可见光遥感图像融合方法[J]. 航天器工程, 2020, 29(1): 100-104. doi:  10.3969/j.issn.1673-8748.2020.01.015

    CHANG X Y, DING Y F, GUO M Y. Infrared and visible image fusion method using hierarchical matching of overall structural information[J]. Spacecraft Engineering, 2020, 29(1): 100-104. doi:  10.3969/j.issn.1673-8748.2020.01.015
    [6]
    Bavirisetti D P, Dhuli R. Fusion of infrared and visible sensor images based on anisotropic diffusion and karhunen-loeve transform[J]. IEEE Sensors, 2016, 16(1): 203-209. doi:  10.1109/JSEN.2015.2478655
    [7]
    Kumar B S. Image fusion based on pixel significance using cross bilateral filter[J]. Signal Image Video Process, 2015, 9(5): 1193-1204. doi:  10.1007/s11760-013-0556-9
    [8]
    MA J, ZHOU Z, WANG B. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17. https://www.sciencedirect.com/science/article/pii/S1350449516305928
    [9]
    LIU Y, WANG Z. Simultaneous image fusion and denoising with adaptive sparse representation[J]. IET Image Process, 2014, 9(5): 347-357. doi:  10.1049/iet-ipr.2014.0311
    [10]
    Burt P, Adelson E. The Laplacian pyramid as a compact image code[J]. IEEE Trans. Commun. , 1983, 31(4): 532-540. doi:  10.1109/TCOM.1983.1095851
    [11]
    马雪亮, 柳慧超. 基于多尺度分析的红外与可见光图像融合研究[J]. 电子测试, 2020, 24(4): 57-58. https://www.cnki.com.cn/Article/CJFDTOTAL-WDZC202024021.htm

    MA Xueliang, LIU Huichao. Research on infrared and visible image fusion based on multiscale analysis[J]. Electronic Test, 2020, 24(4): 57-58. https://www.cnki.com.cn/Article/CJFDTOTAL-WDZC202024021.htm
    [12]
    LI S, YIN H, FANG L. Group-sparse representation with dictionary learning for medical image denoising and fusion[J]. IEEE Trans Biomed Eng. , 2012, 59(12): 3450-3459. doi:  10.1109/TBME.2012.2217493
    [13]
    Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image[C]//Proc of the 2017 IEEE International Conference on Computer Vision, 2017: 4724-4732.
    [14]
    MA J Y, YU W, LIANG P W, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. doi:  10.1016/j.inffus.2018.09.004
    [15]
    LI H, WU X J, Kittler J. Infrared and visible image fusion using a deep learning framework[C]//The 24th International Conference on Pattern Recognition (ICPR), 2018: 2705-2710.
    [16]
    LI H, WU X. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623. doi:  10.1109/TIP.2018.2887342
    [17]
    董安勇, 杜庆治, 苏斌, 等. 基于卷积神经网络的红外与可见光图像融合[J]. 红外技术, 2020, 42(7): 660-669. http://hwjs.nvir.cn/article/id/59ccddf6-ee9a-43da-983b-35ee872dd707

    DONG Anyong, DU Qingzhi, SU Bin, et al. Infrared and visible image fusion based on convolutional neural network[J]. Infrared Technology, 2020, 42(7): 660-669. http://hwjs.nvir.cn/article/id/59ccddf6-ee9a-43da-983b-35ee872dd707
    [18]
    XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44: 502-518.
    [19]
    Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2014: 2672-2680.
    [20]
    HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
    [21]
    Figshare. TNO Image Fusion Dataset[OL]. [2018-09-15]. https://flgshare.com/articles/TNOImageFusionDataset/1008029.
    [22]
    Hwang S, Park J, Kim N, et al. Multispectral pedestrian detection: benchmark dataset and baseline[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 1037-1045.
    [23]
    Roberts J W, Aardt J V, Ahmed F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Appl. Remote Sens. , 2008, 2(1): 023522-023522-28. doi:  10.1117/1.2945910
    [24]
    QU G, ZHANG D, YAN P. Information measure for performance of image fusion[J]. Electron Lett. , 2002, 38(7): 313-315. doi:  10.1049/el:20020212
    [25]
    Kumar B K S. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform[J]. Signal, Image and Video Processing, 2013, 7(6): 1125-1143. doi:  10.1007/s11760-012-0361-x
  • 加载中
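The adaptive weighting step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's learned network: a hand-crafted local-contrast score stands in for the two-stream interactive learning module, and a softmax over the two modality scores produces per-pixel weights that sum to 1 before the weighted fusion.

```python
import numpy as np

def adaptive_weighted_fusion(ir, vis):
    """Fuse two single-channel images with per-pixel adaptive weights.

    Stand-in for MAFusion's adaptive weighting module: each modality's
    deviation from its global mean (a crude saliency proxy) is turned
    into a softmax weight, and the fused image is the weighted sum.
    """
    def saliency(img):
        # absolute deviation from the global mean as a contrast score
        return np.abs(img - img.mean())

    s_ir, s_vis = saliency(ir), saliency(vis)
    # softmax over the two modality scores -> weights sum to 1 per pixel
    e_ir, e_vis = np.exp(s_ir), np.exp(s_vis)
    w_ir = e_ir / (e_ir + e_vis)
    w_vis = 1.0 - w_ir
    return w_ir * ir + w_vis * vis

# toy 2x2 "images" with intensities in [0, 1]
ir = np.array([[0.9, 0.1], [0.5, 0.5]])
vis = np.array([[0.2, 0.8], [0.5, 0.5]])
fused = adaptive_weighted_fusion(ir, vis)
```

Because the weights form a convex combination at every pixel, the fused value always lies between the two input values; where both modalities agree (the bottom row above), the fusion leaves the value unchanged.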

    Figures (8) / Tables (8)
