
Infrared and Visible Image Fusion Based on BEMD and Improved Visual Saliency

CUI Xiaorong, SHEN Tao, HUANG Jianlu, WANG Di

Citation: CUI Xiaorong, SHEN Tao, HUANG Jianlu, WANG Di. Infrared and Visible Image Fusion Based on BEMD and Improved Visual Saliency[J]. Infrared Technology, 2020, 42(11): 1061-1071.


Details
    Author biography:

    CUI Xiaorong (1995-), male, master's degree candidate, mainly engaged in infrared image processing. E-mail: cur1601645438@163.com

  • CLC number: TP391.41


  • Abstract: To address the low target contrast and insufficient image clarity that arise in visual-saliency-based fusion, this paper proposes a Frequency-Tuned algorithm improved with bidimensional empirical mode decomposition (BEMD). First, BEMD captures the salient points and contour information of the infrared image, which guide the generation of its saliency map. The visible image and the enhanced infrared image are then decomposed with the nonsubsampled contourlet transform (NSCT); the low-frequency components are fused under a rule guided by the saliency map, while the high-frequency components are fused by taking the maximum regional energy subject to a threshold. Finally, the inverse NSCT produces the fused image, which is evaluated by subjective visual inspection and objective indicators. The results show that the proposed method analyzes the source images at multiple levels and adaptively, and achieves better visual quality than the compared methods.
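For orientation, the following is a minimal sketch of the baseline Frequency-Tuned (FT) saliency detector of Achanta et al. [9] that the proposed method builds on. It is not the authors' BEMD-guided variant; the OpenCV-based implementation, the function name, and the file names are illustrative assumptions.

```python
# Baseline Frequency-Tuned (FT) saliency [9]: saliency is the Euclidean
# distance between the image's mean Lab vector and each pixel of a
# Gaussian-blurred Lab image. A sketch only, not the paper's improved method.
import cv2
import numpy as np

def ft_saliency(bgr: np.ndarray) -> np.ndarray:
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)        # image-wide mean Lab vector
    blurred = cv2.cvtColor(cv2.GaussianBlur(bgr, (5, 5), 0),
                           cv2.COLOR_BGR2LAB).astype(np.float64)
    sal = np.linalg.norm(blurred - mean_lab, axis=2)  # per-pixel L2 distance
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

if __name__ == "__main__":
    # "infrared.png" is a hypothetical input; IMREAD_COLOR replicates a
    # single-channel infrared image to three channels before conversion.
    img = cv2.imread("infrared.png", cv2.IMREAD_COLOR)
    cv2.imwrite("saliency.png", (ft_saliency(img) * 255).astype(np.uint8))
```

The paper replaces this global mean comparison with guidance from BEMD-extracted salient points and contours, which is what raises target contrast in the infrared saliency map.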
  • Figure 1. The original infrared image

    Figure 2. Detection results of different saliency algorithms

    Figure 3. Comparison before and after gray factor processing

    Figure 4. NSCT decomposition structure diagram

    Figure 5. Multi-scale transform based infrared and visible image fusion scheme

    Figure 6. UN camp fusion image

    Figure 7. Duck fusion image

    Figure 8. Quad fusion image

    Figure 9. Road fusion image

    Figure 10. Meting fusion image

    Table 1. Objective evaluation results of indicators 1-4

    | Metric | Image | DWT     | NSCT    | NSCT-FT | Refs[10] | Refs[11] | Ours    |
    |--------|-------|---------|---------|---------|----------|----------|---------|
    | AG     | Camp  | 5.3124  | 6.4413  | 6.3130  | 6.3847   | 6.2933   | 7.0430  |
    | AG     | Duck  | 12.1330 | 13.009  | 13.9004 | 13.9823  | 15.9637  | 24.9317 |
    | AG     | Quad  | 2.9931  | 3.1687  | 3.2143  | 3.2211   | 5.2103   | 11.2350 |
    | AG     | Road  | 4.1854  | 6.0509  | 6.1220  | 6.1268   | 6.1220   | 10.7534 |
    | SF     | Camp  | 10.0328 | 12.3317 | 12.0787 | 12.2163  | 12.0412  | 13.0310 |
    | SF     | Duck  | 23.7788 | 25.8573 | 27.6482 | 27.824   | 30.7837  | 45.5664 |
    | SF     | Quad  | 8.3171  | 9.2165  | 9.3217  | 9.3416   | 13.3127  | 25.3463 |
    | SF     | Road  | 8.7663  | 12.9879 | 13.0910 | 13.1050  | 13.0910  | 22.0065 |
    | NMI    | Camp  | 0.1265  | 0.1073  | 0.1737  | 0.1757   | 0.1737   | 0.1346  |
    | NMI    | Duck  | 0.2172  | 0.1913  | 0.3194  | 0.3200   | 0.3197   | 0.1639  |
    | NMI    | Quad  | 0.2524  | 0.1451  | 0.1186  | 0.1187   | 0.1185   | 0.1000  |
    | NMI    | Road  | 0.1957  | 0.1453  | 0.2052  | 0.2211   | 0.2052   | 0.2419  |
    | MSSIM  | Camp  | 0.5182  | 0.5739  | 0.564   | 0.5639   | 0.5638   | 0.5386  |
    | MSSIM  | Duck  | 0.4098  | 0.4187  | 0.4637  | 0.4660   | 0.4655   | 0.3500  |
    | MSSIM  | Quad  | 0.4285  | 0.4247  | 0.373   | 0.3738   | 0.3726   | 0.1745  |
    | MSSIM  | Road  | 0.4539  | 0.5201  | 0.518   | 0.5181   | 0.5180   | 0.4906  |
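As a reading aid for Table 1, the first two indicators can be sketched from their standard definitions; this is a hedged sketch under the common textbook formulas, not the authors' evaluation code. Higher AG and SF values indicate a sharper, more detailed fused image.

```python
# Average gradient (AG) and spatial frequency (SF) under their standard
# definitions; a sketch only, the paper's exact implementation may differ.
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    f = img.astype(np.float64)
    dx = f[:-1, 1:] - f[:-1, :-1]                 # horizontal differences
    dy = f[1:, :-1] - f[:-1, :-1]                 # vertical differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def spatial_frequency(img: np.ndarray) -> float:
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))   # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))   # column frequency
    return float(np.hypot(rf, cf))
```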

    Table 2. Objective evaluation results of indicators 5-7

    | Metric | Image | DWT    | NSCT   | NSCT-FT | Refs[10] | Refs[11] | Ours   |
    |--------|-------|--------|--------|---------|----------|----------|--------|
    | QAB/F  | Camp  | 0.3666 | 0.4466 | 0.4438  | 0.444    | 0.4439   | 0.4472 |
    | QAB/F  | Duck  | 0.6608 | 0.7185 | 0.7412  | 0.7417   | 0.7415   | 0.5835 |
    | QAB/F  | Quad  | 0.4914 | 0.5191 | 0.5014  | 0.5028   | 0.5005   | 0.2608 |
    | QAB/F  | Road  | 0.4807 | 0.6067 | 0.6107  | 0.6125   | 0.6107   | 0.6146 |
    | IE     | Camp  | 6.4526 | 6.5352 | 6.9886  | 7.0117   | 6.9853   | 6.5968 |
    | IE     | Duck  | 7.245  | 7.0741 | 7.3985  | 7.4071   | 7.4045   | 7.7774 |
    | IE     | Quad  | 6.1265 | 5.7455 | 5.5744  | 5.5742   | 5.5721   | 7.4564 |
    | IE     | Road  | 6.689  | 7.044  | 7.2423  | 7.2373   | 7.2423   | 7.7609 |
    | CE     | Camp  | 1.2799 | 0.6569 | 0.5848  | 0.5448   | 0.5979   | 0.6171 |
    | CE     | Duck  | 2.7133 | 2.9619 | 2.6941  | 2.7170   | 2.7157   | 2.5487 |
    | CE     | Quad  | 4.6854 | 5.2318 | 5.8247  | 5.8408   | 5.8195   | 3.9693 |
    | CE     | Road  | 1.2912 | 0.7181 | 0.8772  | 0.9025   | 0.8772   | 0.8917 |
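Similarly, information entropy (IE) and cross entropy (CE) from Table 2 can be sketched from their usual histogram-based definitions [27]; the 256-bin histogram and the KL-style CE formula are assumptions here, and how the paper averages CE over the two source images is not reproduced. A lower CE means the fused image's gray-level distribution stays closer to the source.

```python
# Histogram-based information entropy (IE) and cross entropy (CE);
# the binning and the KL-style CE definition are assumptions.
import numpy as np

def _hist_prob(img: np.ndarray) -> np.ndarray:
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    return hist / hist.sum()

def information_entropy(img: np.ndarray) -> float:
    p = _hist_prob(img)
    p = p[p > 0]                                   # skip empty bins
    return float(-np.sum(p * np.log2(p)))

def cross_entropy(source: np.ndarray, fused: np.ndarray) -> float:
    p, q = _hist_prob(source), _hist_prob(fused)
    mask = (p > 0) & (q > 0)                       # avoid log(0)
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))
```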

    Table 3. Objective evaluation results of figure 10

    | Metric | DWT    | NSCT    | NSCT-FT | Refs[10] | Refs[11] | Ours (h) | Ours (i) |
    |--------|--------|---------|---------|----------|----------|----------|----------|
    | AG     | 4.0168 | 5.5897  | 5.6484  | 5.5434   | 5.6784   | 9.8212   | 9.9087   |
    | SF     | 8.7041 | 12.4437 | 12.6046 | 12.6607  | 12.7046  | 12.7175  | 12.7999  |
    | NMI    | 0.1675 | 0.1056  | 0.2298  | 0.1872   | 0.2098   | 0.2335   | 0.2269   |
    | MSSIM  | 0.4607 | 0.529   | 0.5197  | 0.5189   | 0.5267   | 0.4995   | 0.4888   |
    | QAB/F  | 0.4044 | 0.529   | 0.565   | 0.5332   | 0.545    | 0.5753   | 0.554    |
    | IE     | 6.6467 | 6.6998  | 7.3041  | 7.0408   | 7.4043   | 7.298    | 7.2845   |
    | CE     | 1.1031 | 0.7952  | 0.7704  | 1.1443   | 0.8704   | 0.9406   | 1.0687   |
  • [1] MA C, MIAO Z, ZHANG X P, et al. A saliency prior context model for real-time object tracking[J]. IEEE Transactions on Multimedia, 2017, 19(11): 2415-2424. doi:  10.1109/TMM.2017.2694219
    [2] HU W, YANG Y, ZHANG W, et al. Moving Object Detection Using Tensor Based Low-Rank and Saliently Fused-Sparse Decomposition[J]. IEEE Transactions on Image Processing, 2016, 7149(c): 1-1. http://www.ncbi.nlm.nih.gov/pubmed/27849530
    [3] DA CUNHA A L, ZHOU Jianping, DO M N. The nonsubsampled contourlet transform: theory, design, and applications[J]. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101. doi: 10.1109/TIP.2006.877507
    [4] KONG W, ZHANG L, LEI Y. Novel fusion method for visible light and infrared images based on NSST-SF-PCNN[J]. Infrared Physics & Technology, 2014, 65: 103-112. doi: 10.1007/BF03346396
    [5] BHUIYAN S M A, ADHAMI R R, KHAN J F. Fast and adaptive bidimensional empirical mode decomposition using order-statistics filter based envelope estimation[J]. EURASIP Journal on Advances in Signal Processing, 2008(1): 1-18. doi: 10.1155/2008/728356
    [6] TOET A. Computational versus psychophysical bottom-up image saliency: a comparative evaluation study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(11): 2131-2146. doi: 10.1109/TPAMI.2011.53
    [7] HAREL J, KOCH C, PERONA P. Graph-based visual saliency[C]// Advances in neural information processing systems, 2006: 545-552.
    [8] HOU X, ZHANG L. Saliency detection: A spectral residual approach[C]// Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007: 1–8.
    [9] ACHANTA R, HEMAMI S, ESTRADA F, et al. Frequency-tuned salient region detection[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009: 1597-1604.
    [10] FU Zhizhong, WANG Xue, LI Xiaofeng, et al. Infrared and visible image fusion based on visual saliency and NSCT[J]. Journal of University of Electronic Science and Technology of China, 2017, 46(2): 357-363. http://d.wanfangdata.com.cn/Periodical/dzkjdxxb201702007
    [11] LIN Zihui, WEI Yuxin, ZHANG Jianlin, et al. Image fusion of infrared and visible images based on saliency map[J]. Infrared Technology, 2019, 41(7): 640-646. http://www.cnki.com.cn/Article/CJFDTotal-HWJS201907009.htm
    [12] AN Ying, FAN Xunli, CHEN Li, et al. Image fusion combining with FABEMD and improved saliency detection[J]. Systems Engineering and Electronics, 2020, 42(2): 292-300. http://d.wanfangdata.com.cn/periodical/xtgcydzjs202002006
    [13] NUNES J C, BOUAOUNE Y, DELECHELLE E, et al. Image analysis by bidimensional empirical mode decomposition[J]. Image & Vision Computing, 2003, 21(12): 1019-1026. http://www.sciencedirect.com/science/article/pii/S0262885603000945
    [14] HUANG N E, SHEN Z, LONG S R, et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis[J]. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 1998, 454(1971): 903-995. doi: 10.1098/rspa.1998.0193
    [15] Bidimensional empirical mode decomposition method for image processing in sensing system[J]. Computers and Electrical Engineering, 2018, 68: 215-224.
    [16] WANG Di, SHEN Tao, SUN Binbin, et al. Infrared image enhancement algorithm based on atmospheric gray factor[J]. Laser & Infrared, 2019, 49(9): 1135-1140. http://kns.cnki.net/KCMS/detail/detail.aspx?dbcode=CJFD&filename=JGHW201909019
    [17] MA J, ZHOU Z, WANG B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17. http://www.sciencedirect.com/science/article/pii/S1350449516305928
    [18] ZHAI Y, SHAH M. Visual attention detection in video sequences using spatiotemporal cues[C]//Proceedings of the 14th Annual ACM International Conference on Multimedia, 2006: 815-824.
    [19] CHENG Mingming, MITRA N J, HUANG Xiaolei, et al. Global contrast based salient region detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37: 569-582. http://ieeexplore.ieee.org/document/6871397
    [20] ACHANTA R, ESTRADA F, WILS P, et al. Salient region detection and segmentation[C]//International Conference on Computer Vision Systems, 2008: 66-75.
    [21] 杨爱萍, 王海新, 王金斌, 等.基于透射率融合与多重导向滤波的单幅图像去雾[J].光学学报, 2019, 38(12): 104-114. http://www.cnki.com.cn/Article/CJFDTotal-GXXB201812014.htm

    YANG Aiping, WANG Haixin, WANG Jinbin, et al. Image Dehazing Based on Transmission Fusion and Multi-Guided Filtering[J]. Acta Optica Sinica, 2019, 38(12): 104-114. http://www.cnki.com.cn/Article/CJFDTotal-GXXB201812014.htm
    [22] CUI G, FENG H, XU Z, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition[J]. Optics Communications, 2015, 341: 199-209. doi: 10.1016/j.optcom.2014.12.032
    [23] PIELLA G, HEIJMANS H. A new quality metric for image fusion[C]//Proceedings of the International Conference on Image Processing, 2003: 173-176.
    [24] QU G, ZHANG D, YAN P. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315. doi: 10.1049/el:20020212
    [25] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612. http://jamia.bmj.com/external-ref?access_num=10.1109/TIP.2003.819861&link_type=DOI
    [26] XYDEAS C S, PETROVIC V. Objective image fusion performance measure[J]. Electronics Letters, 2000, 36(4): 308-309. doi: 10.1049/el:20000267
    [27] ROBERTS J W, VAN AARDT J, AHMED F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2(1): 023522. doi: 10.1117/1.2945910
Publication history
  • Received: 2020-01-23
  • Revised: 2020-10-28
  • Published: 2020-11-20
