Infrared and Visible Image Fusion Based on RPCA and LatLRR Decomposition

DING Jian, GAO Qingwei, LU Yixiang, SUN Dong

Citation: DING Jian, GAO Qingwei, LU Yixiang, SUN Dong. Infrared and Visible Image Fusion Algorithm Based on the Decomposition of Robust Principal Component Analysis and Latent Low Rank Representation[J]. Infrared Technology, 2022, 44(1): 1-8.


Funding:

National Natural Science Foundation of China (61402004)

National Natural Science Foundation of China (61370110)

Natural Science Foundation of the Higher Education Institutions of Anhui Province (KJ2018A0012)

Author Information:

    DING Jian (1997-), male, born in Wuhu, Anhui; master's student; research interest: image processing. E-mail: 1522523398@qq.com

    Corresponding author: GAO Qingwei (1965-), male, born in Hefei, Anhui; professor and doctoral supervisor; research interests: digital image processing, signal processing, and pattern recognition. E-mail: qingweigao@ahu.edu.cn

  • CLC number: TN391

Infrared and Visible Image Fusion Algorithm Based on the Decomposition of Robust Principal Component Analysis and Latent Low Rank Representation

  • Abstract: The fusion of infrared and visible images plays an increasingly important role in video surveillance, target tracking, and related applications. To obtain better-fused images, this paper proposes a new method that combines robust low-rank-representation-based image decomposition with deep learning. First, robust principal component analysis (RPCA) is used to denoise the training images, and fast latent low-rank representation (LatLRR) learning is used to obtain a sparse matrix that extracts salient features; each source image is then decomposed and reconstructed into a low-frequency image and a high-frequency image. Next, the low-frequency parts are fused with an adaptive weighting strategy, and the high-frequency parts are fused with the deep VGG-19 network. Finally, the new low-frequency image and the new high-frequency image are linearly superimposed to obtain the final result. Experiments verify that the proposed image fusion algorithm has advantages in both subjective and objective evaluation.
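The overall flow described in the abstract (decompose each source with the learned sparse matrix, fuse the two bands separately, then recombine) can be sketched in numpy. This is a minimal illustration under stated assumptions, not the authors' implementation: the salient (high-frequency) part is taken as D·X with the low-frequency part as the residual, the adaptive low-frequency weights are derived from local energy, and a simple max-absolute rule stands in for the paper's VGG-19-based high-frequency fusion.

```python
import numpy as np

def fuse_low_freq(lf_ir, lf_vis):
    """Adaptive weighted fusion of the low-frequency parts.
    Energy-based per-pixel weights are an assumption here; the
    abstract only says the weighting is adaptive."""
    e_ir, e_vis = lf_ir**2, lf_vis**2
    w = e_ir / (e_ir + e_vis + 1e-12)
    return w * lf_ir + (1.0 - w) * lf_vis

def fuse_images(ir, vis, D):
    """Pipeline sketch:
    1) decompose each source with the learned sparse matrix D:
       high-frequency = D @ X, low-frequency = X - D @ X;
    2) fuse low-frequency parts by adaptive weighting;
    3) fuse high-frequency parts (VGG-19-based in the paper;
       a max-absolute rule stands in for it here);
    4) linearly superimpose the two fused parts."""
    hf_ir, hf_vis = D @ ir, D @ vis
    lf_ir, lf_vis = ir - hf_ir, vis - hf_vis
    lf = fuse_low_freq(lf_ir, lf_vis)
    hf = np.where(np.abs(hf_ir) >= np.abs(hf_vis), hf_ir, hf_vis)
    return lf + hf
```

With identical inputs the sketch returns the input unchanged, which is a useful sanity check for any fusion rule.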
  • Figure 1. The process of image fusion
  • Figure 2. Four groups of infrared and visible source images
  • Figure 3. The fusion results of image "1"
  • Figure 4. The fusion results of image "2"
  • Figure 5. The fusion results of image "3"
  • Figure 6. The fusion results of image "4"
  • Figure 7. Objective evaluation results

    Table 1.  The training of sparse matrix D

    Input: $\boldsymbol{X}_{\text{train}}$
    1) $\left[ {\boldsymbol{U}_X}, \operatorname{diag}\left\{ \sigma_{X_i} \right\}, {\boldsymbol{V}_X} \right] = \operatorname{svd}\left( \boldsymbol{X}_{\text{train}} \right)$
    2) $d_i^* = \min \left\{ \dfrac{1}{2\lambda \sigma_{X_i}^2}, 1 \right\}$
    3) $\boldsymbol{D}^* = \boldsymbol{U}_X \operatorname{diag}\left\{ d_i^* \right\} \boldsymbol{U}_X^{\text{T}}$
    Output: decompose the source images with $\boldsymbol{D}^*$
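The training procedure in Table 1 has a closed form: one SVD of the training matrix, a per-component shrinkage, and a symmetric reconstruction. A minimal numpy sketch, where the value of λ and the layout of the training matrix are our assumptions:

```python
import numpy as np

def learn_projection(X_train, lam=0.5):
    """Closed-form construction of the sparse matrix D following
    Table 1: SVD of X_train, per-component shrinkage
    d_i = min(1 / (2 * lam * sigma_i^2), 1), then
    D = U diag(d) U^T. lam controls how strongly the
    high-energy components are suppressed."""
    U, s, _ = np.linalg.svd(X_train, full_matrices=False)
    # guard against zero singular values; min(...) caps them at 1 anyway
    d = np.minimum(1.0 / (2.0 * lam * np.maximum(s, 1e-12) ** 2), 1.0)
    return U @ np.diag(d) @ U.T
```

Because D is built as U diag(d) Uᵀ with d ∈ [0, 1], it is symmetric and positive semidefinite with eigenvalues no larger than 1, so D·X never amplifies any component of the signal.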

    Table 2.  Average objective evaluation results of different fusion images

    Method    DWT      IFE_VIP  CSR      CBF      Proposed
    FMI       0.9111   0.8863   0.9067   0.8869   0.9164
    SCD       1.7413   1.6031   1.1080   1.4273   1.7991
    MS_SSIM   0.8648   0.7977   0.6997   0.7217   0.9099
    VIF       0.2482   0.2373   0.2110   0.2030   0.3267
    Nabf      0.1497   0.1353   0.0529   0.2241   0.0193

    Table 3.  Computational time comparison of different fusion methods

    Method    DWT      IFE_VIP  CSR       CBF       Proposed
    Time/s    0.4822   0.1594   87.9350   13.9968   31.0937
Figures (7) / Tables (3)
Publication History
  • Received: 2020-10-13
  • Revised: 2021-03-30
  • Published: 2022-01-20
