Infrared and Visible Image Fusion Based on Guided Filter and Sparse Representation in NSST Domain
-
Abstract: Image fusion technology aims to solve the problem of insufficient and incomplete information provided by a single-modality image. For the fusion of infrared and visible images, this paper proposes a novel method based on the guided filter (GF) and sparse representation (SR) in the non-subsampled shearlet transform (NSST) domain. Specifically, ① the infrared and visible images are each decomposed by NSST to obtain the corresponding high-frequency and low-frequency sub-band images; ② the high-frequency sub-band images are fused with a GF-weighted fusion strategy; ③ a rolling guidance filter (RGF) is used to further decompose the low-frequency sub-band images into base and detail layers, where the base layers are fused via SR and the detail layers are fused with a local-maximum strategy based on consistency verification; ④ an inverse NSST is applied to the fused high-frequency and low-frequency sub-band images to obtain the final fusion result. Experimental results on public datasets show that, compared with several other methods, the fusion results of the proposed method contain richer texture details and exhibit better subjective visual quality; the proposed method also performs better overall in terms of the objective metrics commonly used to evaluate fusion results.
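As a rough illustration of steps ② and ③ only, the Python sketch below shows one possible form of a guided-filter-weighted sub-band fusion rule and of a rolling-guidance-filter base/detail decomposition; it is not the authors' implementation. The NSST decomposition and the SR-based base-layer fusion are omitted, the helper names (gf_weighted_fusion, rgf_decompose) and all parameter values are illustrative assumptions, and OpenCV's ximgproc module is used for the guided and joint bilateral filters.

```python
# Illustrative sketch only (not the authors' code): a guided-filter-weighted
# fusion rule for a pair of sub-band images and a rolling-guidance-filter
# split of a low-frequency sub-band into base and detail layers.
# Requires numpy and opencv-contrib-python (for cv2.ximgproc).

import cv2
import numpy as np

def gf_weighted_fusion(sub_a, sub_b, radius=8, eps=1e-3):
    """Fuse two sub-band images using guided-filter-refined weight maps."""
    a = sub_a.astype(np.float32)
    b = sub_b.astype(np.float32)
    # Activity measure: absolute Laplacian response of each source.
    sal_a = np.abs(cv2.Laplacian(a, cv2.CV_32F))
    sal_b = np.abs(cv2.Laplacian(b, cv2.CV_32F))
    # Initial binary weights: pick the more active source at each pixel.
    w_a = (sal_a >= sal_b).astype(np.float32)
    w_b = 1.0 - w_a
    # Refine each weight map with a guided filter (guided by its own source)
    # so the weights follow image structure instead of being blocky.
    w_a = cv2.ximgproc.guidedFilter(guide=a, src=w_a, radius=radius, eps=eps)
    w_b = cv2.ximgproc.guidedFilter(guide=b, src=w_b, radius=radius, eps=eps)
    w_sum = w_a + w_b + 1e-12  # avoid division by zero
    return (w_a * a + w_b * b) / w_sum

def rgf_decompose(low_sub, sigma_s=16.0, sigma_r=0.05, iterations=4):
    """Split a low-frequency sub-band into base and detail layers with a
    rolling guidance filter: Gaussian smoothing followed by iterative
    joint bilateral filtering for edge recovery."""
    img = low_sub.astype(np.float32)
    base = cv2.GaussianBlur(img, (0, 0), sigma_s)  # remove small structures
    for _ in range(iterations):
        # Filter the original image, guided by the current estimate.
        base = cv2.ximgproc.jointBilateralFilter(
            base, img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s)
    detail = img - base
    return base, detail
```

With two registered source images normalized to [0, 1], gf_weighted_fusion would be applied band by band to the high-frequency NSST sub-bands, and rgf_decompose to each low-frequency sub-band before the base-layer and detail-layer fusion rules; sigma_s presumably plays the role of the scale parameter Rrgf examined in Table 2.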
-
Keywords:
- image processing
- image fusion
- infrared image
- non-subsampled shearlet transform
- guided filter
-
Table 1 Average values of the evaluation metrics obtained by different fusion algorithms over 10 image pairs

| Method | AG | IE | SF | QAB/F | NCIE | Qe |
| --- | --- | --- | --- | --- | --- | --- |
| GTF | 2.3751 | 6.8289 | 6.7526 | 0.3520 | 0.8095 | 0.3190 |
| ASR | 2.2214 | 6.2858 | 6.6897 | 0.3040 | 0.8047 | 0.5888 |
| GFF | 2.8831 | 6.7084 | 7.8868 | 0.5446 | 0.8062 | 0.5556 |
| NSCT-SR | 3.0440 | 6.3341 | 8.2105 | 0.4922 | 0.8046 | 0.6437 |
| CSR | 2.4636 | 6.2832 | 6.8114 | 0.4668 | 0.8049 | 0.5900 |
| Ours | 3.2777 | 6.9005 | 8.6467 | 0.5338 | 0.8091 | 0.6598 |

Table 2 Impact of the scale parameter Rrgf on fusion performance

| Rrgf | AG | IE | SF | QAB/F | NCIE | Qe |
| --- | --- | --- | --- | --- | --- | --- |
| 4 | 3.25744 | 6.85791 | 8.62015 | 0.53398 | 0.81018 | 0.65698 |
| 8 | 3.26514 | 6.86893 | 8.63135 | 0.53501 | 0.80989 | 0.65918 |
| 16 | 3.27777 | 6.90049 | 8.64670 | 0.53375 | 0.80912 | 0.65978 |
| 32 | 3.29768 | 6.95487 | 8.68145 | 0.52785 | 0.80812 | 0.65647 |

Table 3 Average values of the evaluation metrics obtained by different fusion algorithms over 10 image pairs

| Method | AG | IE | SF | QAB/F | NCIE | Qe |
| --- | --- | --- | --- | --- | --- | --- |
| Max-absolute | 3.2738 | 6.8993 | 8.6385 | 0.5339 | 0.8090 | 0.6614 |
| Ours | 3.2777 | 6.9005 | 8.6467 | 0.5338 | 0.8091 | 0.6598 |
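For reference, the sketch below gives standard definitions of three of the simpler metrics reported in the tables, average gradient (AG), information entropy (IE), and spatial frequency (SF). The function names are hypothetical, the inputs are assumed to be 8-bit grayscale images, and the exact implementations behind the reported numbers, as well as the QAB/F, NCIE, and Qe metrics, are not reproduced here and may differ in detail.

```python
# Standard definitions of three common fusion metrics; illustrative only,
# the implementations used for the numbers in the tables may differ.

import numpy as np

def average_gradient(img):
    """AG: mean local gradient magnitude (larger = sharper result)."""
    img = img.astype(np.float64)
    dx = img[:-1, 1:] - img[:-1, :-1]   # horizontal differences
    dy = img[1:, :-1] - img[:-1, :-1]   # vertical differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def information_entropy(img, bins=256):
    """IE: Shannon entropy of the grey-level histogram (larger = more information)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """SF: combined row and column frequencies (larger = richer detail)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```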
[1] MA J, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. DOI: 10.1016/j.inffus.2018.02.004
[2] LIU Y P, JIN J, WANG Q, et al. Region level based multi-focus image fusion using quaternion wavelet and normalized cut[J]. Signal Processing, 2014, 97: 9-30. DOI: 10.1016/j.sigpro.2013.10.010
[3] Toet A. Image fusion by a ratio of low-pass pyramid[J]. Pattern Recognition Letters, 1989, 9(4): 245-253. DOI: 10.1016/0167-8655(89)90003-2
[4] Choi M, Kim R Y, Nam M R, et al. Fusion of multispectral and panchromatic satellite images using the curvelet transform[J]. IEEE Geoscience and Remote Sensing Letters, 2005, 2(2): 136-140. DOI: 10.1109/LGRS.2005.845313
[5] Easley G, Labate D, Lim W Q. Sparse directional image representations using the discrete shearlet transform[J]. Applied and Computational Harmonic Analysis, 2008, 25(1): 25-46. DOI: 10.1016/j.acha.2007.09.003
[6] KANG J Y, LU W, ZHANG W J. Fusion of PET and MRI images using non-subsampled shearlet transform combined with sparse representation[J]. Journal of Chinese Computer Systems, 2019, 40(12): 2506-2511. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-XXWX201912006.htm
[7] LIU Z W, FENG Y, CHEN H, et al. A fusion algorithm for infrared and visible based on guided filtering and phase congruency in NSST domain[J]. Optics and Lasers in Engineering, 2017, 97: 71-77. DOI: 10.1016/j.optlaseng.2017.05.007
[8] DONG A Y, DU Q Z, SU B, et al. Infrared and visible image fusion based on convolutional neural network[J]. Infrared Technology, 2020, 42(7): 660-669. (in Chinese) http://hwjs.nvir.cn/article/id/hwjs202007009
[9] YE K T, LI W, SHU L L, et al. Infrared and visible image fusion method based on improved saliency detection and non-subsampled shearlet transform[J]. Infrared Technology, 2021, 43(12): 1212-1221. (in Chinese) http://hwjs.nvir.cn/article/id/bfd9f932-e0bd-4669-b698-b02d42e31805
[10] WANG X N, PAN Q, TIAN N L. Multi-modality image fusion algorithm based on NSST-DWT-ICSAPCNN[J]. Infrared Technology, 2022, 44(5): 497-503. (in Chinese) http://hwjs.nvir.cn/article/id/0644931d-58ad-4bbd-a752-5f4bbd2061e1
[11] CHANG L H. Image fusion method based on shearlet transform and sparse representation[J]. Acta Scientiarum Naturalium Universitatis Sunyatseni, 2017, 56(4): 16-19. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-ZSDZ201704003.htm
[12] WANG X H, XING J Y, WANG X Y, et al. Noisy image fusion algorithm based on shearlet and low-rank sparse representation[J]. Journal of Liaoning Normal University (Natural Science Edition), 2022, 45(2): 191-200. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-LNSZ202202008.htm
[13] WU Y. Image Fusion Algorithm Based on Sparse Representation and Non-Subsampled Shearlet Transform[D]. Beijing: Beijing Jiaotong University, 2018. (in Chinese)
[14] HE K M, SUN J, TANG X O. Guided image filtering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397-1409. DOI: 10.1109/TPAMI.2012.213
[15] ZHANG Q, SHEN X, XU L, et al. Rolling guidance filter[C]//13th European Conference on Computer Vision, 2014: 815-830.
[16] MA J L, ZHOU Z Q, WANG B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17.
[17] Aharon M, Elad M, Bruckstein A. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation[J]. IEEE Transactions on Signal Processing, 2006, 54(11): 4311-4322. DOI: 10.1109/TSP.2006.881199
[18] YANG B, LI S T. Multifocus image fusion and restoration with sparse representation[J]. IEEE Transactions on Instrumentation and Measurement, 2010, 59(4): 884-892. DOI: 10.1109/TIM.2009.2026612
[19] LI H, Manjunath B S, Mitra S K. Multisensor image fusion using the wavelet transform[J]. Graphical Models and Image Processing, 1995, 57(3): 235-245.
[20] MA J Y, CHEN C, LI C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31: 100-109.
[21] LIU Y, WANG Z F. Simultaneous image fusion and denoising with adaptive sparse representation[J]. IET Image Processing, 2015, 9(5): 347-357.
[22] LI S T, KANG X D, HU J W. Image fusion with guided filtering[J]. IEEE Transactions on Image Processing, 2013, 22(7): 2864-2875.
[23] LIU Y, LIU S P, WANG Z F. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24: 147-164.
[24] LIU Y, CHEN X, Ward R K, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882-1886.
[25] SHEN Y, NA J, WU Z D, et al. Tetrolet transform images fusion algorithm based on fuzzy operator[J]. Journal of Frontiers of Computer Science and Technology, 2015, 9(9): 1132.
[26] JING Z L, XIAO G, LI Z H. Image Fusion: Theory and Applications[M]. Beijing: Higher Education Press, 2007. (in Chinese)
[27] ZHENG Y, Essock E A, Hansen B C, et al. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms[J]. Information Fusion, 2007, 8(2): 177-192.
[28] Xydeas C S, Petrovic V S. Objective pixel-level image fusion performance measure[C]//AeroSense, 2000: 89-98.
[29] WANG Q, SHEN Y, JIN J. Performance Evaluation of Image Fusion Techniques[M]. Amsterdam: Elsevier, 2008: 469-492.
[30] Piella G, Heijmans H. A new quality metric for image fusion[C]//International Conference on Image Processing, IEEE, 2003, 2: III-173-176.