WANG Junyao, WANG Zhishe, WU Yuanyuan, CHEN Yanlin, SHAO Wenyu. Multi-Feature Adaptive Fusion Method for Infrared and Visible Images[J]. Infrared Technology, 2022, 44(6): 571-579.

Multi-Feature Adaptive Fusion Method for Infrared and Visible Images

More Information
  • Received Date: February 21, 2022
  • Revised Date: March 13, 2022
  • Abstract: Owing to their different imaging mechanisms, infrared images represent typical targets through pixel intensity distributions, whereas visible images describe texture details through edges and gradients. Existing fusion methods fail to adapt to the characteristics of the source images, so their results do not simultaneously retain infrared target features and visible texture details. Therefore, a multi-feature adaptive fusion method for infrared and visible images is proposed in this study. First, a multi-scale densely connected network is constructed that effectively reuses all intermediate features across scales and levels, further strengthening feature extraction and reconstruction. Second, a multi-feature adaptive loss function is designed. Using pixel intensity and gradient as measurement criteria, multi-scale features of the source images are extracted with a VGG-16 network, and feature weight coefficients are computed from the degree of information preservation. This loss function supervises network training and evenly extracts the respective feature information of the source images, yielding a better fusion effect. Experimental results on public datasets demonstrate that the proposed method is superior to other typical methods in both subjective and objective evaluations.
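
To make the loss design concrete, below is a minimal PyTorch sketch of a multi-feature adaptive loss of the kind the abstract describes: a frozen VGG-16 extracts multi-scale features from each source, an activation-energy measure stands in for the "degree of information preservation", and the resulting weights balance intensity terms alongside a gradient term for texture. The tapped layers, the weighting rule, and all names (vgg_features, adaptive_weights, fusion_loss) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Frozen VGG-16 used purely as a multi-scale feature extractor.
_vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

# Layer indices whose outputs serve as "multi-scale features" (an assumption;
# these are the relu1_2/relu2_2/relu3_3/relu4_3 taps common in perceptual losses).
_LAYERS = {3, 8, 15, 22}

def vgg_features(x):
    """Collect VGG-16 activations at several scales (x: Nx1xHxW grayscale)."""
    feats, h = [], x.repeat(1, 3, 1, 1)  # replicate to 3 channels for VGG input
    for i, layer in enumerate(_vgg):
        h = layer(h)
        if i in _LAYERS:
            feats.append(h)
    return feats

def gradient(x):
    """Finite-difference gradient magnitude (a stand-in for a Sobel operator)."""
    dx = F.pad((x[..., :, 1:] - x[..., :, :-1]).abs(), (0, 1, 0, 0))
    dy = F.pad((x[..., 1:, :] - x[..., :-1, :]).abs(), (0, 0, 0, 1))
    return dx + dy

def adaptive_weights(ir, vis):
    """Softmax-normalized weights from the mean activation energy of each
    source's multi-scale features -- one plausible reading of the abstract's
    'degree of information preservation'."""
    e_ir = torch.stack([f.pow(2).mean() for f in vgg_features(ir)]).sum()
    e_vis = torch.stack([f.pow(2).mean() for f in vgg_features(vis)]).sum()
    w = torch.softmax(torch.stack([e_ir, e_vis]), dim=0)
    return w[0], w[1]

def fusion_loss(fused, ir, vis):
    """Adaptively weighted intensity term plus a gradient term that keeps the
    stronger edge from either source."""
    w_ir, w_vis = adaptive_weights(ir, vis)
    loss_int = w_ir * F.l1_loss(fused, ir) + w_vis * F.l1_loss(fused, vis)
    loss_grad = F.l1_loss(gradient(fused), torch.max(gradient(ir), gradient(vis)))
    return loss_int + loss_grad
```

In a training loop this would be called as `loss = fusion_loss(fused, ir, vis)` on the network output. Note that the abstract describes coefficients computed per extracted feature; this sketch collapses them into a single scalar pair per source for brevity, so it should be read as a starting point rather than a faithful reproduction.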