Infrared and Visible Image Fusion Based on SGWT and Multi-Saliency

TIAN Lifan, YANG Shen, LIANG Jiaming, WU Jin

Citation: TIAN Lifan, YANG Shen, LIANG Jiaming, WU Jin. Infrared and Visible Image Fusion Based on SGWT and Multi-Saliency[J]. Infrared Technology, 2022, 44(7): 676-685.


Funding:

National Natural Science Foundation of China (61702384)

Wuhan University of Science and Technology (2017xz008)

Author biography: TIAN Lifan (1999-), male, from Tianmen, Hubei; M.Eng.; research interests include image processing and pattern recognition.

Corresponding author: YANG Shen (1977-), female, from Loudi, Hunan; D.Eng., Associate Professor; research interests include multimedia communication and signal processing. E-mail: 317987@qq.com

  • CLC number: TP751.1


  • Abstract: The spectral graph wavelet transform (SGWT) can fully exploit the spectral properties of an image in the graph domain and is well suited to representing small irregular regions. Building on these advantages, this paper proposes a multi-saliency-based infrared and visible image fusion algorithm. First, SGWT decomposes each source image into one low-frequency subband and several high-frequency subbands. For the low-frequency coefficients, a multi-saliency fusion rule that matches human visual characteristics is proposed by combining several complementary low-level features; for the high-frequency coefficients, a region-based absolute-value-maximum rule is proposed that fully exploits the correlation among neighboring pixels. Finally, a weighted least squares (WLS) optimization is applied to the fused image reconstructed by the inverse SGWT, highlighting salient targets while preserving as much visible-light background detail as possible. Experimental results show that, compared with seven related algorithms such as DWT (discrete wavelet transform) and NSCT (non-subsampled contourlet transform), the proposed method retains more visible background detail while highlighting infrared targets and achieves better visual quality; it also outperforms the other methods on four objective metrics: variance, entropy, Qabf, and mutual information.
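As an illustration of the two coefficient-level fusion rules described in the abstract, here is a minimal Python sketch (not the authors' code). It assumes the source images have already been decomposed by SGWT into one low-frequency subband and a list of high-frequency subbands each, and that saliency maps `sal_ir` and `sal_vis` have been computed; the SGWT analysis/synthesis, the specific low-level saliency features, and the final WLS refinement are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(low_ir, low_vis, sal_ir, sal_vis, eps=1e-8):
    """Low-frequency rule: weight the approximation coefficients of each
    source by its saliency map (weights normalized to sum to 1)."""
    w_ir = sal_ir / (sal_ir + sal_vis + eps)
    return w_ir * low_ir + (1.0 - w_ir) * low_vis

def fuse_highpass(high_ir, high_vis, win=3):
    """High-frequency rule: region absolute-value-maximum. The local
    (win x win) mean of |coefficients| measures regional activity; the
    coefficient from the more active source is kept at each pixel."""
    act_ir = uniform_filter(np.abs(high_ir), size=win)
    act_vis = uniform_filter(np.abs(high_vis), size=win)
    return np.where(act_ir >= act_vis, high_ir, high_vis)

def fuse_coefficients(low_ir, low_vis, highs_ir, highs_vis, sal_ir, sal_vis):
    """Apply the two rules to an SGWT decomposition (one low-frequency
    subband and a list of high-frequency subbands per source). The fused
    coefficients would then go through the inverse SGWT and, per the
    paper, a WLS-based refinement (both omitted in this sketch)."""
    fused_low = fuse_lowpass(low_ir, low_vis, sal_ir, sal_vis)
    fused_highs = [fuse_highpass(h_ir, h_vis)
                   for h_ir, h_vis in zip(highs_ir, highs_vis)]
    return fused_low, fused_highs
```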
  • Figure 1.  Generation of the weight graph

    Figure 2.  Framework of the proposed fusion algorithm

    Figure 3.  Spectral graph wavelet decomposition results of the Street image

    Figure 4.  Four sets of infrared and visible source images

    Figure 5.  Fusion results of the Flower image by different methods

    Figure 6.  Fusion results of the Lawn image by different methods

    Figure 7.  Fusion results of the Street image by different methods

    Figure 8.  Fusion results of the Mountain image by different methods

    Table 1.  Objective evaluation indicators of the Flower image

    Metric    DWT       DTCWT     CVT       GFF       NSCT      NSCT_PC   NSCT_SR   SGWT
    Var       37.3080   36.3593   36.2790   36.5681   37.2982   42.5228   40.6044   42.6666
    EN        6.5573    6.5645    6.5636    6.5935    6.5568    6.8158    6.7140    6.8323
    Qabf      0.6365    0.6673    0.6390    0.5893    0.6975    0.6861    0.6964    0.6976
    MI        2.3616    2.4034    2.3082    2.6099    2.4929    3.0072    3.1261    3.5597

    Table 2.  Objective evaluation indicators of the Lawn image

    Metric    DWT       DTCWT     CVT       GFF       NSCT      NSCT_PC   NSCT_SR   SGWT
    Var       58.4410   57.5139   57.7888   57.1643   57.8236   58.4217   57.9354   63.9020
    EN        7.6123    7.5891    7.5840    7.5986    7.5868    7.6568    7.6426    7.7596
    Qabf      0.6757    0.6848    0.6779    0.4531    0.7126    0.6338    0.7152    0.6928
    MI        3.2909    3.2203    2.9565    3.4018    3.2720    3.1470    4.8323    3.7267

    Table 3.  Objective evaluation indicators of the Street image

    Metric    DWT       DTCWT     CVT       GFF       NSCT      NSCT_PC   NSCT_SR   SGWT
    Var       31.6360   29.4182   29.0921   32.4384   30.6035   35.7108   36.4667   37.0747
    EN        6.4955    6.4084    6.4144    6.5206    6.4520    6.7867    6.8106    6.8876
    Qabf      0.6030    0.6046    0.5453    0.6740    0.6479    0.6659    0.6356    0.6587
    MI        1.4877    1.3668    1.2417    1.2955    1.4770    2.5260    2.5474    2.5755

    Table 4.  Objective evaluation indicators of the Mountain image

    Metric    DWT       DTCWT     CVT       GFF       NSCT      NSCT_PC   NSCT_SR   SGWT
    Var       28.3593   26.2615   26.8678   25.8712   27.0885   32.7642   31.4361   35.5412
    EN        6.6387    6.4844    6.5329    6.4260    6.5542    6.9621    6.8022    7.1060
    Qabf      0.4349    0.4582    0.4114    0.4999    0.4942    0.4417    0.4528    0.4341
    MI        1.0551    1.0712    1.0196    1.1347    1.0740    1.1678    1.0069    1.1929
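For readers who want to reproduce the kind of numbers reported in Tables 1-4, the following is a generic sketch of the three simpler metrics (Var, EN, MI) for 8-bit grey-level images. The exact definitions used in the paper may differ slightly, and Qabf, which relies on gradient-based edge information transferred from both source images, is not reproduced here.

```python
import numpy as np

def variance(img):
    """Var: intensity variance of the fused image (larger = higher contrast)."""
    return float(np.var(img.astype(np.float64)))

def entropy(img, bins=256):
    """EN: Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(src, fused, bins=256):
    """MI between one source image and the fused image; fusion papers
    commonly report the sum MI(ir, fused) + MI(vis, fused)."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    outer = px[:, None] * py[None, :]
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / outer[nz])))
```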
Publication history
  • Received:  2021-08-01
  • Revised:  2021-10-21
  • Published:  2022-07-20
