Visual Fidelity Fusion of Infrared and Visible Images Using an Edge-Aware Smoothing-Sharpening Filter
Abstract: Recently, multi-scale feature extraction has been widely used in infrared and visible image fusion; however, most extraction processes are overly complex, and the visual results are often poor. To improve the visual fidelity of fusion results, this paper proposes a multi-scale image fusion model based on an edge-aware smoothing-sharpening filter (EASSF). First, a multi-scale horizontal image decomposition method based on the EASSF is proposed to decompose the source images into multi-scale texture components and basic components in the horizontal direction. Second, a max-fusion (MF) rule is used to merge the texture components, avoiding the loss of detail information. Third, the basic components are fused by a perceptual-fusion (PF) rule to capture salient target information. Finally, the fused image is obtained by integrating the fused multi-scale texture components and basic components. By analyzing the perceptual fusion coefficient of the PF rule against objective measures of the fusion results, an appropriate range of the coefficient for infrared and visible image fusion under the multi-scale EASSF is obtained. Within this range, compared with several classical and popular fusion methods, the proposed model not only avoids the complexity of feature-information extraction but also effectively preserves the visual fidelity of the fusion results by integrating the salient spectral information of the basic components.
Keywords:
- infrared and visible image fusion
- edge-aware smoothing-sharpening filter
- visual fidelity
- perceptual fusion
Table 1 Quantitative fusion data under different perception coefficients

| Setting | EN | MI | Qabf | Nabf | SCD | MS-SIM | VIFF |
|---------|--------|---------|--------|--------|--------|--------|--------|
| (a) | 6.0190 | 12.0380 | 0.5339 | 0.0458 | 1.7619 | 0.9088 | 0.4558 |
| (b) | 6.1825 | 12.3651 | 0.5638 | 0.0505 | 1.7885 | 0.9301 | 0.4970 |
| (c) | 6.5425 | 13.0851 | 0.6187 | 0.0684 | 1.8166 | 0.9613 | 0.5727 |
| (d) | 6.7653 | 13.5307 | 0.6243 | 0.1090 | 1.7887 | 0.9667 | 0.5887 |
| (e) | 6.7762 | 13.5524 | 0.6175 | 0.1218 | 1.7700 | 0.9649 | 0.5801 |
| (f) | 6.7717 | 13.5434 | 0.6099 | 0.1091 | 1.7624 | 0.9643 | 0.5764 |
| (g) | 6.7753 | 13.5506 | 0.6039 | 0.1229 | 1.7555 | 0.9633 | 0.5713 |
| (h) | 6.7622 | 13.5244 | 0.5691 | 0.1358 | 1.7495 | 0.9628 | 0.5680 |

Table 2 Objective evaluation data of "Road"

| Method | EN | MI | Qabf | Nabf | SCD | MS-SIM | VIFF |
|----------|--------|---------|--------|--------|--------|--------|--------|
| DRTV | 6.7132 | 13.0263 | 0.3982 | 0.1437 | 1.1212 | 0.9177 | 0.4249 |
| Bayesian | 5.5489 | 11.0977 | 0.3295 | 0.0034 | 1.4586 | 0.8100 | 0.2566 |
| VGG | 5.9387 | 11.8774 | 0.4144 | 0.0038 | 1.4889 | 0.8933 | 0.3772 |
| ResNet | 5.9322 | 11.8644 | 0.3823 | 0.0022 | 1.4762 | 0.8889 | 0.3634 |
| MLEPF | 6.2354 | 12.4707 | 0.4911 | 0.1379 | 1.5650 | 0.8808 | 0.4712 |
| Ours | 6.5425 | 13.0851 | 0.6187 | 0.0684 | 1.8166 | 0.9613 | 0.5727 |

Table 3 Objective evaluation data of "Kaptein_1654"

| Method | EN | MI | Qabf | Nabf | SCD | MS-SIM | VIFF |
|----------|--------|---------|--------|--------|--------|--------|--------|
| DRTV | 7.2171 | 14.4343 | 0.3368 | 0.1296 | 1.0802 | 0.6963 | 0.1312 |
| Bayesian | 6.9898 | 13.9795 | 0.5255 | 0.0016 | 1.6979 | 0.8648 | 0.2578 |
| VGG | 6.7667 | 13.5333 | 0.3614 | 0.0006 | 1.6992 | 0.8516 | 0.2842 |
| ResNet | 6.7802 | 13.5604 | 0.3493 | 0.0003 | 1.6941 | 0.8510 | 0.2819 |
| MLEPF | 7.2391 | 14.4781 | 0.4921 | 0.1874 | 1.2040 | 0.6755 | 0.2220 |
| Ours | 7.2296 | 14.4592 | 0.4448 | 0.0438 | 1.7434 | 0.8442 | 0.2974 |

Table 4 Objective evaluation data of "Kaptein_1123"

| Method | EN | MI | Qabf | Nabf | SCD | MS-SIM | VIFF |
|----------|--------|---------|--------|----------|--------|--------|--------|
| DRTV | 6.6741 | 13.3482 | 0.2392 | 0.1100 | 1.2149 | 0.8055 | 0.2960 |
| Bayesian | 6.4215 | 12.8430 | 0.2711 | 0.0035 | 1.8185 | 0.8048 | 0.1505 |
| VGG | 6.4876 | 12.9753 | 0.3248 | 0.000896 | 1.8200 | 0.8782 | 0.2720 |
| ResNet | 6.5069 | 13.0139 | 0.3171 | 0.0007 | 1.8192 | 0.8781 | 0.2679 |
| MLEPF | 7.1401 | 14.2803 | 0.4110 | 0.1744 | 1.3662 | 0.5958 | 0.1951 |
| Ours | 7.1576 | 14.1151 | 0.4484 | 0.0728 | 1.8204 | 0.8945 | 0.3414 |
[1] MA J Y, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. DOI: 10.1016/j.inffus.2018.02.004
[2] SINGH R, VATSA M, NOORE A. Integrated multilevel image fusion and match score fusion of visible and infrared face images for robust recognition[J]. Pattern Recognition, 2008, 41(3): 880-893. DOI: 10.1016/j.patcog.2007.06.022
[3] REINHARD E, ASHIKHMIN M, GOOCH B, et al. Color transfer between images[J]. IEEE Computer Graphics and Applications, 2001, 21(5): 34-41.
[4] SIMONE G, FARINA A, MORABITO F C, et al. Image fusion techniques for remote sensing applications[J]. Information Fusion, 2002, 3(1): 3-15. DOI: 10.1016/S1566-2535(01)00056-2
[5] CHEN Feng, LI Min, MA Le, et al. Infrared and visible image fusion algorithm based on the rolling guidance filter[J]. Infrared Technology, 2020, 42(1): 54-61. http://hwjs.nvir.cn/article/id/hwjs202001008
[6] YANG Jiuzhang, LIU Weijian, CHENG Yang. Asymmetric infrared and visible image fusion based on contrast pyramid and bilateral filtering[J]. Infrared Technology, 2021, 43(9): 840-844. http://hwjs.nvir.cn/article/id/1c7de46d-f30d-48dc-8841-9e8bf3c91107
[7] DU Qinglei, XU Han, MA Yong, et al. Fusing infrared and visible images of different resolutions via total variation model[J]. Sensors, 2018, 18(11): 3827. DOI: 10.3390/s18113827
[8] ZHAO Z, XU S, ZHANG C, et al. Bayesian fusion for infrared and visible images[J]. Signal Processing, 2020, 177: 107734.
[9] LI H, WU X, Kittler J. Infrared and visible image fusion using a deep learning framework[C]//The 24th International Conference on Pattern Recognition (ICPR), 2018: 2705-2710.
[10] LI Hui, WU Xiaojun, DURRANI T S. Infrared and visible image fusion with ResNet and zero-phase component analysis[J]. Infrared Physics & Technology, 2019, 102: 103039.
[11] TAN Wei, THITØN W, XIANG Pei, et al. Multi-modal brain image fusion based on multi-level edge-preserving filtering[J]. Biomedical Signal Processing and Control, 2021, 64: 102280.
[12] DENG Guang, GALETTO F J, AL-NASRAWI M, et al. A guided edge-aware smoothing-sharpening filter based on patch interpolation model and generalized Gamma distribution[J]. IEEE Open Journal of Signal Processing, 2021, 2: 119-135. DOI: 10.1109/OJSP.2021.3063076
[13] Toet A. TNO Image Fusion Dataset[EB/OL]. [2021-10-01]. http://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.
[14] Xydeas C S, Petrovic V. Objective image fusion performance measure[J]. Electron. Lett., 2000, 36(4): 308-309. DOI: 10.1049/el:20000267
[15] Aslantas V, Bendes E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. International Journal of Electronics and Communications, 2015, 69(12): 1890-1896. DOI: 10.1016/j.aeue.2015.09.004
[16] LI H, WU X J, Kittler J. Infrared and visible image fusion using a deep learning framework[C]//The 24th International Conference on Pattern Recognition (ICPR), 2018: 2705-2710.
[17] YAN Huibin, LI Zhongmin. A general perceptual infrared and visible image fusion framework based on linear filter and side window filtering technology[J]. IEEE Access, 2020, 8: 3029-3041. DOI: 10.1109/ACCESS.2019.2961626