Infrared and Visible Image Fusion Algorithm Based on Regional Similarity
Abstract: To address the problems of local blur and incomplete background information in traditional fusion algorithms for infrared and visible images, this paper proposes a new fusion algorithm. An edge detection operator is used to extract image contours, and energy-based weighted fusion is applied; a region-similarity method is used to extract the signal domain, and the images are finally fused according to signal strength. To verify the correctness of the algorithm, comparative tests were conducted and a quantitative analysis was performed using three parameters: standard deviation, information entropy, and average gradient. Compared with the traditional weighted average algorithm, the proposed method improves the standard deviation by up to 106.3%. The test results confirm that the fusion method proposed in this study achieves a better fusion effect and has practical value.
Key words:
- infrared image
- fusion algorithm
- edge detection
- similarity
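The core of the method summarized above is energy-based weighted fusion: each source image contributes to a pixel in proportion to its local (regional) energy. The following is a minimal NumPy sketch of that idea only, not the authors' exact implementation; the 3×3 window, the edge-padding, and all function names are assumptions, and the contour-extraction and region-similarity steps are omitted:

```python
import numpy as np

def local_energy(img, win=3):
    """Regional energy: sum of squared intensities over a win x win window.

    Uses edge padding so the output has the same shape as the input.
    """
    p = np.pad(img.astype(float) ** 2, win // 2, mode="edge")
    h, w = img.shape
    e = np.zeros((h, w), dtype=float)
    for i in range(win):
        for j in range(win):
            e += p[i:i + h, j:j + w]
    return e

def fuse(ir, vis, eps=1e-9):
    """Energy-weighted fusion: weight each source by its local energy.

    Regions where the infrared image carries more energy (e.g. warm
    targets) are dominated by the IR source, and vice versa.
    """
    e_ir, e_vis = local_energy(ir), local_energy(vis)
    w = e_ir / (e_ir + e_vis + eps)
    return w * ir + (1.0 - w) * vis
```

For example, fusing a bright (high-energy) IR patch with a dim visible patch yields a result close to the IR values, since the per-pixel weight `w` approaches 1 where IR energy dominates.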
Table 1. The first group of image fusion data

| Parameters          | Wavelet fusion algorithm | Weighted average algorithm | IHS transform algorithm | Algorithm in this paper |
|---------------------|--------------------------|----------------------------|-------------------------|-------------------------|
| Standard deviation  | 33.4522                  | 24.9562                    | 25.8361                 | 40.2675                 |
| Information entropy | 6.7911                   | 6.3652                     | 6.4102                  | 7.5621                  |
| Average gradient    | 5.3961                   | 5.3241                     | 5.4103                  | 6.1258                  |
Table 2. The second group of image fusion data

| Parameters          | Wavelet fusion algorithm | Weighted average algorithm | IHS transform algorithm | Algorithm in this paper |
|---------------------|--------------------------|----------------------------|-------------------------|-------------------------|
| Standard deviation  | 48.2561                  | 29.6521                    | 30.2478                 | 61.1598                 |
| Information entropy | 7.2014                   | 6.4567                     | 6.5125                  | 7.6512                  |
| Average gradient    | 7.4851                   | 6.9654                     | 6.8512                  | 7.8623                  |
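The three evaluation metrics in Tables 1 and 2 are standard objective fusion-quality measures. A hedged sketch of how they are typically computed for an 8-bit grey-level image; the exact conventions the paper uses (log base for entropy, gradient normalisation, histogram binning) are assumptions here:

```python
import numpy as np

def std_dev(img):
    """Standard deviation: higher values indicate stronger contrast."""
    return float(np.std(img))

def entropy(img, bins=256):
    """Shannon information entropy (bits) of the grey-level histogram:
    higher values indicate more information content."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def avg_gradient(img):
    """Average gradient: mean of sqrt((dx^2 + dy^2) / 2) over interior
    pixels; higher values indicate better-preserved detail and sharpness."""
    g = img.astype(float)
    dx = g[1:, :-1] - g[:-1, :-1]     # vertical neighbour differences
    dy = g[:-1, 1:] - g[:-1, :-1]     # horizontal neighbour differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

A uniform image scores zero on all three metrics, while a 0/255 checkerboard maximises all of them for a binary image, which matches the tables' reading that larger values mean a better fusion result.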