Infrared and Visible Image Fusion Based on NSCT Combined with Saliency Map and Region Energy
Keywords:
- image fusion /
- non-subsampled contourlet transform /
- region-energy adaptive weighting /
- sum of Laplacian energy
Abstract: To address the low clarity, low contrast, and indistinct targets produced by traditional infrared and visible image-fusion algorithms, this study proposes a fusion method based on the non-subsampled contourlet transform (NSCT) combined with a saliency map and region energy. First, an improved frequency-tuned (FT) method is used to obtain the saliency map of the infrared image, which is then normalized to obtain a saliency-map weight, and a single-scale Retinex (SSR) algorithm is used to enhance the visible image. Second, NSCT is used to decompose the infrared and visible images, and a new fusion weight designed from the normalized saliency map and region energy guides low-frequency coefficient fusion, which suppresses the noise that region-energy adaptive weighting tends to introduce; an improved "weighted sum of Laplacian energy" guides the fusion of high-frequency coefficients. Finally, the fused image is obtained by the inverse NSCT. The proposed method was compared with seven classical methods on six groups of images. It achieved the best scores for information entropy, mutual information, average gradient, and standard deviation; for spatial frequency it was second best on the first group of images and best on the remaining groups. The fused images display rich information, high clarity, high contrast, and moderate brightness well suited to human observation, which verifies the effectiveness of the proposed method.
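As a rough illustration of the first step described above, the following is a minimal sketch of the baseline FT saliency map and its normalization into a weight map. It assumes a grayscale infrared input and uses the classic FT formulation (distance between a Gaussian-blurred copy and the global mean); the paper's improved FT variant and the subsequent NSCT fusion stages are not reproduced here.

```python
import numpy as np

def _gauss_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def ft_saliency_weight(ir, sigma=2.0):
    """Baseline frequency-tuned (FT) saliency of a grayscale image,
    normalized to [0, 1] for use as a fusion weight.
    (Classic FT form; the paper uses an improved variant.)"""
    img = ir.astype(np.float64)
    k = _gauss_kernel(sigma, int(3 * sigma))
    # Separable Gaussian blur: rows, then columns (zero-padded borders).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Saliency: distance of the blurred image from the global mean.
    sal = np.abs(blurred - img.mean())
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)  # normalized weight in [0, 1]
```

A bright infrared target against a dark background then receives weights close to 1, which is what lets the normalized map steer the low-frequency fusion toward the target region.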
Table 1 Objective evaluation results of Nato_camp
Methods     IE      SF       MI      AG      SD
LP          6.6022  13.9059  1.8726  5.6680  28.3882
RP          6.5073  14.7712  1.7414  5.8658  27.3435
DTCWT       6.4292  12.8851  1.8237  5.1054  25.6576
CVT         6.3256  11.8559  1.7603  4.7871  24.0154
NSCT        6.6369  13.5828  1.8622  5.6284  28.6006
DCHWT       6.3231  11.0041  1.7605  4.4255  25.8427
Hybrid_MSD  6.7020  15.1560  2.0825  6.0049  29.1284
Proposed    7.0235  14.8216  2.6280  6.2154  37.1931
Table 2 Objective evaluation results of Tree
Methods     IE      SF       MI      AG      SD
LP          5.8484  7.7347   1.6911  3.0154  14.7225
RP          5.8788  9.0936   1.5948  3.2999  15.1486
DTCWT       5.7484  7.1703   1.6836  2.7127  13.5645
CVT         5.7172  6.8121   1.6988  2.5939  13.0307
NSCT        5.8498  7.4456   1.6643  2.9577  14.6721
DCHWT       6.0325  6.1470   1.5695  2.4196  16.3405
Hybrid_MSD  6.2954  8.5220   1.8939  3.2627  20.2859
Proposed    6.8678  11.3701  3.1874  4.8995  31.3029
Table 3 Objective evaluation results of Duine
Methods     IE      SF       MI      AG      SD
LP          5.9780  7.4565   1.5175  3.3497  15.4549
RP          5.8923  6.9660   1.5174  3.0050  14.5825
DTCWT       5.8808  6.7566   1.5059  3.0241  14.4632
CVT         5.8322  6.5233   1.4829  2.9103  14.0198
NSCT        5.9806  7.0812   1.5181  3.2809  15.5069
DCHWT       5.8015  5.7144   1.5240  2.5810  13.6918
Hybrid_MSD  5.9472  8.2933   1.5553  3.6047  15.2660
Proposed    7.1126  13.2301  3.1155  6.1093  35.0350
Table 4 Objective evaluation results of APC_4
Methods     IE      SF       MI      AG      SD
LP          5.8574  12.7384  0.9065  5.5624  14.5848
RP          5.6555  12.4937  0.7502  5.0838  12.9853
DTCWT       5.6766  11.7014  0.8259  5.0618  12.8683
CVT         5.5498  11.3298  0.7902  4.8656  11.6973
NSCT        5.8743  12.1900  0.8733  5.4444  14.7461
DCHWT       5.5262  9.7761   0.8671  4.2252  11.4534
Hybrid_MSD  5.9514  14.0328  1.0230  5.9508  15.5852
Proposed    6.6951  16.5562  2.8247  7.5079  25.6268
Table 5 Objective evaluation results of Kaptein_1654
Methods     IE      SF       MI      AG      SD
LP          6.6557  19.0519  2.2753  6.8009  36.8878
RP          6.7122  19.7509  2.2324  6.9286  34.3867
DTCWT       6.4858  17.9294  2.2036  6.3067  31.2040
CVT         6.3880  16.3414  2.2711  5.6601  28.7014
NSCT        6.6451  18.7667  2.2189  6.7824  35.7430
DCHWT       6.7437  15.4424  2.2980  5.5617  36.3196
Hybrid_MSD  6.8692  20.4999  2.1659  7.1597  37.6306
Proposed    7.0479  20.6661  3.0611  7.7455  53.3902
Table 6 Objective evaluation results of Movie_18
Methods     IE      SF       MI      AG      SD
LP          5.9545  8.3646   1.7206  3.0967  17.9892
RP          5.7520  9.0084   1.5003  3.1423  16.3032
DTCWT       5.6640  7.8037   1.6294  2.8594  14.5863
CVT         5.5991  7.6734   1.6439  2.8605  13.9172
NSCT        5.8977  8.2244   1.6706  3.0956  17.1809
DCHWT       5.5755  6.6402   1.6596  2.4455  15.0237
Hybrid_MSD  6.2333  9.2791   2.3197  3.3676  21.1429
Proposed    6.8108  10.0577  3.3102  3.7343  47.7950
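Four of the five metrics reported in Tables 1-6 depend only on the fused image and have standard definitions; a minimal sketch of them follows (mutual information, which also requires the two source images, is omitted). The exact normalizations used in the paper are not specified here, so these are the common textbook forms.

```python
import numpy as np

def entropy(img):
    """Information entropy (IE) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0·log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """Spatial frequency (SF): RMS of row and column first differences."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    """Average gradient (AG) over forward differences."""
    f = img.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]    # trim so dx and dy align
    dy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def standard_deviation(img):
    """Standard deviation (SD) of pixel intensities."""
    return float(np.std(img.astype(np.float64)))
```

Higher values of all four indicate, respectively, more information content, richer detail, sharper edges, and stronger contrast, which is the sense in which the Proposed row dominates the tables.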