Volume 44, Issue 8, Aug. 2022
SUN Bin, ZHUGE Wuwei, GAO Yunxiang, WANG Zixuan. Infrared and Visible Image Fusion Based on Latent Low-Rank Representation[J]. Infrared Technology, 2022, 44(8): 853-862.

Infrared and Visible Image Fusion Based on Latent Low-Rank Representation

  • Received Date: 2021-08-20
  • Revised Date: 2021-09-25
  • Publish Date: 2022-08-20
  • Abstract: Infrared and visible image fusion is widely used in target tracking, detection, and recognition. To preserve image details and enhance contrast, this study proposed an infrared and visible image fusion method based on latent low-rank representation. Latent low-rank representation was used to decompose each source image into a base layer and a salient layer: the base layer contained the main content and structural information, while the salient layer contained local regions with relatively concentrated energy. The ratio-of-low-pass pyramid was then used to decompose the base layer into low-frequency and high-frequency sub-layers, and fusion rules were designed according to the characteristics of each layer. Because the energy of the low-frequency base layer is relatively dispersed, it was expressed with a sparse representation, and the maximum-L1-norm and maximum-sparse-coefficient rules were combined by weighted averaging to retain different salient features. The absolute value of the high-frequency base layer was used to enhance contrast. For the salient layer, local variance was used to measure saliency, and a weighted average was used to highlight the target region with enhanced contrast. Experimental results on the TNO dataset show that the proposed method performs well in both qualitative and quantitative evaluations: the low-rank-decomposition-based method enhances target contrast and retains rich details in the fused infrared and visible images. (Two of these layer-wise fusion rules are illustrated in the code sketch below.)
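The following minimal NumPy/SciPy sketch illustrates two of the fusion rules described in the abstract: the local-variance weighted average for the salient layers and an absolute-value selection for the high-frequency part of the base layer. It is not the authors' implementation; the LatLRR decomposition, the ratio-of-low-pass pyramid, and the sparse-coding step for the low-frequency base are assumed to be computed elsewhere, and all function names are illustrative.

```python
# Sketch of two layer-wise fusion rules (hypothetical helper names, not from the paper).
import numpy as np
from scipy.ndimage import uniform_filter


def local_variance(img, win=7):
    """Local variance in a win x win window: E[x^2] - (E[x])^2."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    return np.maximum(mean_sq - mean * mean, 0.0)


def fuse_salient(sal_ir, sal_vis, win=7, eps=1e-12):
    """Weighted average of the salient layers, with local variance as the saliency measure."""
    v_ir = local_variance(sal_ir, win)
    v_vis = local_variance(sal_vis, win)
    w_ir = v_ir / (v_ir + v_vis + eps)
    return w_ir * sal_ir + (1.0 - w_ir) * sal_vis


def fuse_high_freq(hf_ir, hf_vis):
    """Pixel-wise absolute-value selection for the high-frequency base layers."""
    return np.where(np.abs(hf_ir) >= np.abs(hf_vis), hf_ir, hf_vis)
```

Under these assumptions, the fused image would be reconstructed by combining the sparse-representation result for the low-frequency base, the output of fuse_high_freq for the high-frequency base (inverting the pyramid), and the output of fuse_salient for the salient layers.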
