
Multi-Feature Adaptive Fusion Method for Infrared and Visible Images

WANG Junyao, WANG Zhishe, WU Yuanyuan, CHEN Yanlin, SHAO Wenyu

Citation: WANG Junyao, WANG Zhishe, WU Yuanyuan, CHEN Yanlin, SHAO Wenyu. Multi-Feature Adaptive Fusion Method for Infrared and Visible Images[J]. Infrared Technology, 2022, 44(6): 571-579.

Funding:

Shanxi Province Basic Research Program (201901D111260)

Open Fund of the Shanxi Key Laboratory of Information Detection and Processing (ISPT2020-4)

Details
    About the authors:

    WANG Junyao (1997-), female, master's student. Research interests: image fusion and deep learning. E-mail: jywang0119@163.com

    Corresponding author:

    WANG Zhishe (1982-), male, Ph.D., associate professor. Research interests: intelligent information processing, machine vision, and machine learning. E-mail: wangzs@tyust.edu.cn

  • CLC number: TP391.4


  • Abstract: Owing to their different imaging mechanisms, infrared images characterize typical targets through pixel intensity distributions, whereas visible images describe texture details through edges and gradients. Existing fusion methods cannot adapt to the characteristics of the source images, so the fused results fail to preserve infrared target features and visible texture details at the same time. To address this, this paper proposes a multi-feature adaptive fusion method for infrared and visible images. First, a multi-scale densely connected network is constructed, which effectively aggregates the intermediate features of all scales and levels and thereby strengthens feature extraction and feature reconstruction. Second, a multi-feature adaptive loss function is designed: a VGG-16 network extracts multi-scale features from the source images, pixel intensity and gradient serve as the measurement criteria, and feature weight coefficients are computed from the degree of feature retention. Training the network under this loss extracts the characteristic information of each source image in a balanced manner and thus yields better fusion results. Experiments on public datasets show that the proposed method outperforms other typical methods in both subjective and objective evaluation.
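    The loss described above lends itself to a compact implementation. The following PyTorch sketch is a minimal illustration rather than the authors' released code: it assumes a frozen pretrained VGG-16 from torchvision as the feature extractor, an illustrative choice of feature layers, a Laplacian kernel as the gradient measure, and softmax-normalized retention scores as the adaptive weight coefficients.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG-16 used only to measure how much feature information each
# source image carries. Layer indices are an assumed multi-scale choice.
_VGG = vgg16(weights="DEFAULT").features.eval()   # torchvision >= 0.13 assumed
for p in _VGG.parameters():
    p.requires_grad_(False)
_LAYERS = {3, 8, 15, 22}   # relu1_2, relu2_2, relu3_3, relu4_3 (illustrative)

def vgg_features(x: torch.Tensor) -> list:
    """Multi-scale VGG-16 feature maps of a single-channel image batch (N,1,H,W)."""
    h, feats = x.repeat(1, 3, 1, 1), []            # VGG-16 expects 3 channels
    for i, layer in enumerate(_VGG):
        h = layer(h)
        if i in _LAYERS:
            feats.append(h)
    return feats

def grad_map(x: torch.Tensor) -> torch.Tensor:
    """Edge/detail map of a 1-channel image via a Laplacian kernel (illustrative)."""
    k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                     device=x.device, dtype=x.dtype).view(1, 1, 3, 3)
    return F.conv2d(x, k, padding=1).abs()

def retention(feats: list, use_grad: bool) -> torch.Tensor:
    """Feature-retention score: mean intensity or mean gradient of the feature maps."""
    if not use_grad:
        return torch.stack([f.abs().mean() for f in feats]).mean()
    scores = []
    for f in feats:
        gx = (f[..., :, 1:] - f[..., :, :-1]).abs().mean()
        gy = (f[..., 1:, :] - f[..., :-1, :]).abs().mean()
        scores.append(gx + gy)
    return torch.stack(scores).mean()

def fusion_loss(fused, ir, vis):
    """Intensity term + gradient term; each source is weighted by its retention score."""
    f_ir, f_vis = vgg_features(ir), vgg_features(vis)
    w_int = torch.softmax(torch.stack([retention(f_ir, False),
                                       retention(f_vis, False)]), dim=0)
    w_grad = torch.softmax(torch.stack([retention(f_ir, True),
                                        retention(f_vis, True)]), dim=0)
    l_int = w_int[0] * F.mse_loss(fused, ir) + w_int[1] * F.mse_loss(fused, vis)
    l_grad = w_grad[0] * F.l1_loss(grad_map(fused), grad_map(ir)) + \
             w_grad[1] * F.l1_loss(grad_map(fused), grad_map(vis))
    return l_int + l_grad
```

    In training, `fusion_loss(fused, ir, vis)` would supervise the densely connected encoder-decoder, so whichever source carries stronger intensity or gradient information receives a correspondingly larger weight.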
  • Figure 1.  Principle of the multi-feature adaptive fusion method

    Figure 2.  Qualitative comparison results of dense skip connections

    Figure 3.  Qualitative comparison results of the multi-feature adaptive module

    Figure 4.  Qualitative comparison results on the TNO dataset

    Figure 5.  Qualitative comparison results on the Roadscene dataset

    Figure 6.  Qualitative comparison results on the OTCBVS dataset

    Table 1.  Quantitative comparison results of dense skip connections

    Metrics No_dense Only_En Only_De Ours
    EN 6.90676 7.00322 6.98832 7.00507
    NCIE 0.80460 0.80443 0.80519 0.80464
    SD 35.68939 38.39500 43.99837 41.44601
    SCD 1.79833 1.84037 1.78040 1.84590
    VIFF 0.56981 0.60081 0.58553 0.62035
    MS_SSIM 0.92346 0.92305 0.91745 0.93027
    Note: The best value is marked in bold and the second-best value is underlined; higher values of these objective metrics indicate better fusion performance. (A minimal sketch of how EN and SD are computed follows Table 5.)

    Table 2.  Quantitative comparison results of the multi-feature adaptive loss function

    Metrics Fmean Finten Fgrad Ours
    EN 6.82654 6.95075 6.92680 7.00507
    NCIE 0.80432 0.80460 0.80455 0.80464
    SD 33.71341 38.96415 36.28725 41.44601
    SCD 1.77467 1.83212 1.82509 1.84590
    VIFF 0.55015 0.59584 0.59834 0.62035
    MS_SSIM 0.91588 0.92522 0.92957 0.93027

    Table 3.  Quantitative comparison results on the TNO dataset

    Metrics MDLatLRR DenseFuse IFCNN UNFusion GANMcC U2Fusion RFN-Nest Ours
    EN 6.29523 6.25275 6.33795 6.98828 6.57763 6.84306 6.89803 7.00507
    NCIE 0.80435 0.80451 0.80404 0.80912 0.80452 0.80392 0.80428 0.80464
    SD 23.70282 22.85769 24.06712 40.93903 29.92973 33.59608 34.85373 41.44601
    SCD 1.59002 1.57018 1.59052 1.70351 1.67191 1.76712 1.78875 1.84590
    VIFF 0.30902 0.28295 0.34396 0.45033 0.40468 0.62469 0.53121 0.62035
    MS_SSIM 0.90133 0.87696 0.90474 0.87404 0.85915 0.87283 0.91217 0.93027

    Table 4.  Quantitative comparison results on the Roadscene dataset

    Metrics MDLatLRR DenseFuse IFCNN UNFusion GANMcC U2Fusion RFN-Nest Ours
    EN 6.86973 6.82473 6.91635 7.37792 7.23683 7.41910 7.33677 7.47617
    NCIE 0.80701 0.80718 0.80629 0.80998 0.80670 0.80771 0.80649 0.80658
    SD 33.05830 32.22811 33.64622 50.26212 43.81169 49.15315 46.03340 51.43262
    SCD 1.38030 1.36329 1.38949 1.63715 1.60983 1.76111 1.72851 1.84427
    VIFF 0.39051 0.35676 0.41529 0.48462 0.45465 0.58028 0.47894 0.61286
    MS_SSIM 0.88881 0.85857 0.89583 0.85420 0.85744 0.93322 0.87277 0.92484

    Table 5.  Quantitative comparison results on the OTCBVS dataset

    Metrics MDLatLRR DenseFuse IFCNN UNFusion GANMcC U2Fusion RFN-Nest Ours
    EN 7.12984 7.07264 7.23707 7.28585 6.60437 7.35554 7.19973 7.57158
    NCIE 0.80557 0.80569 0.80505 0.81011 0.80560 0.80501 0.80528 0.80578
    SD 36.43416 35.31958 38.72566 44.91661 27.50886 43.12793 39.04909 50.45001
    SCD 1.44604 1.41625 1.45256 1.42248 1.09337 1.62165 1.48793 1.75565
    VIFF 0.27563 0.27216 0.29275 0.22372 0.16750 0.32515 0.26596 0.35130
    MS_SSIM 0.83939 0.81697 0.86125 0.79352 0.67759 0.87722 0.80530 0.88162
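    Of the six metrics reported in the tables above, EN and SD depend only on the fused image; the sketch below shows one common NumPy formulation of the two (function names are illustrative and not tied to any particular toolbox). NCIE, SCD, VIFF, and MS-SSIM additionally require the source images.

```python
import numpy as np

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """EN: Shannon entropy of the gray-level histogram of an 8-bit image."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img: np.ndarray) -> float:
    """SD: standard deviation of pixel intensities, a simple contrast measure."""
    return float(np.std(img.astype(np.float64)))
```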
  • [1] Paramanandham N, Rajendiran K. Multi sensor image fusion for surveillance applications using hybrid image fusion algorithm[J]. Multimedia Tools and Applications, 2018, 77(10): 12405-12436. doi:  10.1007/s11042-017-4895-3
    [2] ZHANG Xingchen, YE Ping, QIAO Dan, et al. Object fusion tracking based on visible and infrared images: a comprehensive review[J]. Information Fusion, 2020, 63: 166-187. doi:  10.1016/j.inffus.2020.05.002
    [3] TU Zhengzheng, LI Zhun, LI Chenglong, et al. Multi-interactive dual-decoder for RGB-thermal salient object detection[J]. IEEE Transactions on Image Processing, 2021, 30: 5678-5691. doi:  10.1109/TIP.2021.3087412
    [4] FENG Zhanxiang, LAI Jianhuang, XIE Xiaohua. Learning modality-specific representations for visible-infrared person re-identification[J]. IEEE Transactions on Image Processing, 2020, 29: 579-590. doi:  10.1109/TIP.2019.2928126
    [5] MO Yang, KANG Xudong, DUAN Puhong, et al. Attribute filter based infrared and visible image fusion[J]. Information Fusion, 2021, 75: 41-54. doi:  10.1016/j.inffus.2021.04.005
    [6] LI Hui, WU Xiaojun, Kittler J. MDLatLRR: a novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4733-4746. doi:  10.1109/TIP.2020.2975984
    [7] LI Chenyang, DING Kun, WENG Shuai, et al. Infrared and visible image fusion based on an improved spectral residual saliency map[J]. Infrared Technology, 2020, 42(11): 1042-1047. http://hwjs.nvir.cn/article/id/6e57a6fb-ba92-49d9-a000-c00e7a933365
    [8] WANG Zhishe, YANG Fengbao, PENG Zhihao, et al. Multi-sensor image enhanced fusion algorithm based on NSST and top-hat transformation[J]. Optik-International Journal for Light and Electron Optics, 2015, 126(23): 4184-4190. doi:  10.1016/j.ijleo.2015.08.118
    [9] LIU Yu, CHEN Xun, PENG Hu, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207. doi:  10.1016/j.inffus.2016.12.001
    [10] WANG Zhishe, WU Yuanyuan, WANG Junyao, et al. Res2Fusion: infrared and visible image fusion based on dense Res2net and double non-local attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12.
    [11] MA Jiayi, MA Yong, LI Chang. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. doi:  10.1016/j.inffus.2018.02.004
    [12] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]//Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015: 234-241.
    [13] Toet A. Computational versus psychophysical bottom-up image saliency: a comparative evaluation study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(11): 2131-2146.
    [14] LI Hui, WU Xiaojun. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623. doi:  10.1109/TIP.2018.2887342
    [15] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118. doi:  10.1016/j.inffus.2019.07.011
    [16] WANG Zhishe, WANG Junyao, WU Yuanyuan, et al. UNFusion: a unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3360- 3374. doi:  10.1109/TCSVT.2021.3109895
    [17] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. doi:  10.1016/j.inffus.2018.09.004
    [18] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.
    [19] LI Hui, WU Xiaojun, Kittler J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86. doi:  10.1016/j.inffus.2021.02.023
    [20] TOET A. TNO Image Fusion Dataset[DB/OL]. [2014-04-26]. https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.
    [21] XU Han. Roadscene Database[DB/OL]. [2020-08-07]. https://github.com/hanna-xu/RoadScene.
    [22] Ariffin S. OTCBVS Database[DB/OL]. [2007-06]. http://vcipl-okstate.org/pbvs/bench/.
    [23] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518. doi:  10.1109/TPAMI.2020.3012548
    [24] Aslantas V, Bendes E. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008(2): 1-28.
    [25] LIU Zheng, Blasch E, XUE Zhiyun, et al. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 34: 94-109.
    [26] RAO Yunjiang. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 1997(8): 355-375.
    [27] Aslantas V, Bendes E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU - International Journal of Electronics and Communications, 2015, 69: 1890-1896. doi:  10.1016/j.aeue.2015.09.004
    [28] HAN Yu, CAI Yunze, CAO Yin, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013(14): 127-135.
    [29] MA Kede, ZENG Kai, WANG Zhou. Perceptual quality assessment for multi-exposure image fusion[J]. IEEE Transactions on Image Processing, 2015, 24: 3345-3356. doi:  10.1109/TIP.2015.2442920
Publication history
  • Received: 2022-02-22
  • Revised: 2022-03-14
  • Published: 2022-06-20
