Infrared and Visible Image Fusion Based on SGWT and Multi-Saliency
Abstract: The spectral graph wavelet transform (SGWT) can fully exploit the spectral characteristics of an image in the graph domain and is well suited to representing small irregular regions. Building on these advantages, this paper proposes a multi-saliency-based infrared and visible image fusion algorithm. First, SGWT decomposes each source image into one low-frequency sub-band and several high-frequency sub-bands. For the low-frequency coefficients, a multi-saliency fusion rule matched to human visual perception is proposed by combining multiple complementary low-level features. For the high-frequency coefficients, a region-based absolute-maximum rule is proposed that fully accounts for the correlation between neighboring pixels. Finally, a weighted least squares (WLS) optimization is applied to the image reconstructed by the inverse SGWT, highlighting salient targets while retaining as much of the visible-light background detail as possible. Experimental results show that, compared with seven related algorithms including DWT (discrete wavelet transform) and NSCT (non-subsampled contourlet transform), the proposed method highlights infrared targets while preserving more visible background detail, giving better visual quality; it also leads on four objective metrics: variance, entropy, Qabf, and mutual information.
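The high-frequency rule described above — keeping, at each position, the coefficient from whichever source has the larger absolute value over a neighborhood rather than at a single pixel — can be sketched as follows. The 3×3 mean-of-absolute-values activity measure and the function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highfreq_region_abs_max(c_ir, c_vis, window=3):
    """Region absolute-value-max rule for high-frequency sub-bands:
    at each pixel, keep the coefficient whose window x window
    neighborhood has the larger mean absolute value."""
    # Regional activity: local mean of |coefficient|
    act_ir = uniform_filter(np.abs(c_ir), size=window)
    act_vis = uniform_filter(np.abs(c_vis), size=window)
    # Select per pixel from the band with higher regional activity
    return np.where(act_ir >= act_vis, c_ir, c_vis)

# Toy example: a strong isolated IR detail wins over its whole neighborhood,
# while flat IR regions defer to the visible band's coefficients
ir = np.zeros((5, 5)); ir[2, 2] = 10.0
vis = np.full((5, 5), 0.5)
fused = fuse_highfreq_region_abs_max(ir, vis)
```

Using the neighborhood mean instead of a pixel-wise comparison keeps the selection map spatially coherent, which reduces isolated mis-selected coefficients.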
Keywords:
- image fusion
- spectral graph wavelet transform
- multi-saliency
- WLS
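The WLS post-processing listed in the keywords is commonly built on a Farbman-style weighted-least-squares smoother, which penalizes output gradients except where a guide image (here, plausibly the visible image) has strong edges. A minimal sparse-solver sketch — the gradient-derived weights and the λ and α defaults are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(image, guide, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving WLS smoothing: minimize
        sum_p (u_p - image_p)^2 + lam * sum_{p~q} w_pq (u_p - u_q)^2,
    where neighbor weights w_pq are small across strong guide edges,
    so edges in the guide survive in the output."""
    img = np.asarray(image, dtype=np.float64)
    gde = np.asarray(guide, dtype=np.float64)
    h, w = img.shape
    n = h * w
    # Pairwise weights for vertical and horizontal neighbor pairs
    wy = 1.0 / (np.abs(np.diff(gde, axis=0)) ** alpha + eps)  # (h-1, w)
    wx = 1.0 / (np.abs(np.diff(gde, axis=1)) ** alpha + eps)  # (h, w-1)
    idx = np.arange(n).reshape(h, w)
    i = np.concatenate([idx[:-1, :].ravel(), idx[:, :-1].ravel()])
    j = np.concatenate([idx[1:, :].ravel(), idx[:, 1:].ravel()])
    v = np.concatenate([wy.ravel(), wx.ravel()])
    W = sp.coo_matrix((v, (i, j)), shape=(n, n))
    W = W + W.T                                          # symmetric weights
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
    A = (sp.eye(n) + lam * L).tocsc()                    # (I + lam*L) u = image
    return spsolve(A, img.ravel()).reshape(h, w)
```

Since the data term anchors the result to the fused image while the smoothness term is guided by the visible image, this step can suppress reconstruction artifacts without blurring visible-light detail.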
Table 1. Objective evaluation indicators of flower images

| Metric | DWT | DTCWT | CVT | GFF | NSCT | NSCT_PC | NSCT_SR | SGWT |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|
| Var    | 37.3080 | 36.3593 | 36.2790 | 36.5681 | 37.2982 | 42.5228 | 40.6044 | 42.6666 |
| EN     | 6.5573  | 6.5645  | 6.5636  | 6.5935  | 6.5568  | 6.8158  | 6.7140  | 6.8323  |
| Qabf   | 0.6365  | 0.6673  | 0.6390  | 0.5893  | 0.6975  | 0.6861  | 0.6964  | 0.6976  |
| MI     | 2.3616  | 2.4034  | 2.3082  | 2.6099  | 2.4929  | 3.0072  | 3.1261  | 3.5597  |

Table 2. Objective evaluation indicators of Lawn images

| Metric | DWT | DTCWT | CVT | GFF | NSCT | NSCT_PC | NSCT_SR | SGWT |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|
| Var    | 58.4410 | 57.5139 | 57.7888 | 57.1643 | 57.8236 | 58.4217 | 57.9354 | 63.9020 |
| EN     | 7.6123  | 7.5891  | 7.5840  | 7.5986  | 7.5868  | 7.6568  | 7.6426  | 7.7596  |
| Qabf   | 0.6757  | 0.6848  | 0.6779  | 0.4531  | 0.7126  | 0.6338  | 0.7152  | 0.6928  |
| MI     | 3.2909  | 3.2203  | 2.9565  | 3.4018  | 3.2720  | 3.1470  | 4.8323  | 3.7267  |

Table 3. Objective evaluation indicators of Street images

| Metric | DWT | DTCWT | CVT | GFF | NSCT | NSCT_PC | NSCT_SR | SGWT |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|
| Var    | 31.6360 | 29.4182 | 29.0921 | 32.4384 | 30.6035 | 35.7108 | 36.4667 | 37.0747 |
| EN     | 6.4955  | 6.4084  | 6.4144  | 6.5206  | 6.4520  | 6.7867  | 6.8106  | 6.8876  |
| Qabf   | 0.6030  | 0.6046  | 0.5453  | 0.6740  | 0.6479  | 0.6659  | 0.6356  | 0.6587  |
| MI     | 1.4877  | 1.3668  | 1.2417  | 1.2955  | 1.4770  | 2.5260  | 2.5474  | 2.5755  |

Table 4. Objective evaluation indicators of Mountain images

| Metric | DWT | DTCWT | CVT | GFF | NSCT | NSCT_PC | NSCT_SR | SGWT |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|
| Var    | 28.3593 | 26.2615 | 26.8678 | 25.8712 | 27.0885 | 32.7642 | 31.4361 | 35.5412 |
| EN     | 6.6387  | 6.4844  | 6.5329  | 6.4260  | 6.5542  | 6.9621  | 6.8022  | 7.1060  |
| Qabf   | 0.4349  | 0.4582  | 0.4114  | 0.4999  | 0.4942  | 0.4417  | 0.4528  | 0.4341  |
| MI     | 1.0551  | 1.0712  | 1.0196  | 1.1347  | 1.0740  | 1.1678  | 1.0069  | 1.1929  |
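The Var, EN, and MI columns in the tables above follow standard histogram-based definitions; a sketch of those three is given below (Qabf, which requires gradient-based edge information from both sources, is omitted). The 256-bin grey-level histogram and the convention that fusion MI sums the MI of each source with the fused image are assumptions, not details stated in this section:

```python
import numpy as np

def variance(img):
    """Var: spread of grey levels; larger usually means higher contrast."""
    return float(np.var(np.asarray(img, dtype=np.float64)))

def entropy(img, bins=256):
    """EN: Shannon entropy (bits) of the grey-level histogram."""
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(src, fused, bins=256):
    """MI between one source and the fused image, from the joint histogram.
    For fusion evaluation, the reported MI is typically
    MI(IR, F) + MI(VIS, F) (assumed convention)."""
    joint, _, _ = np.histogram2d(np.ravel(src), np.ravel(fused),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of src
    py = pxy.sum(axis=0, keepdims=True)   # marginal of fused
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

For a binary image with equal halves at grey levels 0 and 255, entropy is 1 bit and MI of the image with itself equals that entropy, which makes these definitions easy to sanity-check.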