Infrared and Visible Image Fusion Based on Transform Domain VGGNet19

LI Yongping, YANG Yanchun, DANG Jianwu, WANG Yangping

Citation: LI Yongping, YANG Yanchun, DANG Jianwu, WANG Yangping. Infrared and Visible Image Fusion Based on Transform Domain VGGNet19[J]. Infrared Technology, 2022, 44(12): 1293-1300.


Funding projects: 

Program for Changjiang Scholars and Innovative Research Team Development IRT_16R36

National Natural Science Foundation of China 62067006

Science and Technology Program of Gansu Province 18JR3RA104

Industrial Support Program for Colleges and Universities of Gansu Province 2020C-19

Lanzhou Science and Technology Program 2019-4-49

Young Doctoral Fund of the Gansu Provincial Department of Education 2022QB-067

Natural Science Foundation of Gansu Province 21JR7RA300

Tianyou Innovation Team of Lanzhou Jiaotong University TY202003

Lanzhou Jiaotong University-Tianjin University Joint Innovation Fund 2021052

Detailed information
    Author:

    LI Yongping (1996-), female, master's student; main research interest: image fusion. E-mail: 2647336295@qq.com

    Corresponding author:

    YANG Yanchun (1979-), female, associate professor; main research interests: image fusion and image registration. E-mail: yangyanchun102@sina.com

  • CLC number: TP391

Infrared and Visible Image Fusion Based on Transform Domain VGGNet19

  • Abstract: To address the loss of detail and edge blurring in infrared and visible image fusion, an infrared and visible image fusion method based on the VGGNet19 network in the transform domain is proposed. First, so that more accurate base and detail information can be extracted during decomposition, each source image is decomposed by a multi-scale guided filter, which has an edge-preserving smoothing property, into one base layer and multiple detail layers. The base layers are then fused using Laplacian energy, which retains the main energy information, to obtain the fused base image. Next, to prevent the fusion result from losing detail and edge information, the VGGNet19 network extracts features from the detail layers; L1 regularization, upsampling, and a final weighted-average strategy yield the fused detail image. Finally, the fused base and detail images are summed to obtain the final fusion result. Experimental results show that the proposed method better extracts edge and detail information from the source images and achieves better results in both subjective and objective evaluation metrics.
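The pipeline described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: a plain box filter stands in for the edge-preserving multi-scale guided filter, and the absolute detail response stands in for the VGGNet19 feature activity maps (which the paper further processes with L1 regularization and upsampling); the base-layer rule uses local Laplacian energy as described.

```python
import numpy as np

def box_filter(img, r):
    """Local mean over a (2r+1)x(2r+1) window, via integral images."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="reflect")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/col so window sums telescope
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def laplacian(img):
    """Discrete 4-neighbour Laplacian with reflective borders."""
    p = np.pad(img, 1, mode="reflect")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def decompose(img, radii=(4, 8, 16)):
    """Split an image into one base layer and several detail layers.
    The paper uses a multi-scale guided filter here; a box filter is
    substituted to keep the sketch self-contained."""
    current = np.asarray(img, dtype=np.float64)
    details = []
    for r in radii:
        smoothed = box_filter(current, r)
        details.append(current - smoothed)  # detail at this scale
        current = smoothed
    return current, details  # base layer, list of detail layers

def fuse_base(a, b, r=4):
    """Base-layer fusion: keep, per pixel, the source with the larger
    local Laplacian energy (windowed sum of squared Laplacian responses)."""
    ea = box_filter(laplacian(a) ** 2, r)
    eb = box_filter(laplacian(b) ** 2, r)
    return np.where(ea >= eb, a, b)

def fuse_details(da_list, db_list):
    """Detail-layer fusion by weighted averaging of activity maps.
    The paper derives the activity maps from VGGNet19 features with L1
    regularization and upsampling; the absolute detail response is used
    here as a stand-in."""
    fused = []
    for da, db in zip(da_list, db_list):
        wa, wb = np.abs(da), np.abs(db)
        s = wa + wb + 1e-12  # avoid division by zero
        fused.append((wa / s) * da + (wb / s) * db)
    return fused

def fuse(ir, vis):
    """Full pipeline: decompose both inputs, fuse base and detail layers, sum."""
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)
    result = fuse_base(base_ir, base_vis)
    for d in fuse_details(det_ir, det_vis):
        result = result + d
    return result
```

Because the decomposition telescopes (base plus all detail layers reconstructs the input exactly), fusing an image with itself returns that image, which is a convenient sanity check for any implementation of this scheme.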

  • Figure 1.   VGGNet19 network structure model

    Figure 2.   Block diagram of the proposed algorithm

    Figure 3.   Experimental results: (a) Infrared image; (b) Visible image; (c) IFCNN; (d) CSR; (e) JSRSD; (f) WLS; (g) GSF; (h) NSCT; (i) Lp-cnn; (j) Ours

    Figure 4.   Three-dimensional comparative analysis of fusion results

    Figure 5.   Indicator comparison line charts: (a) FMI-dct; (b) FMI-pixel; (c) FMI-w; (d) QP; (e) QY

  • [1] MA Jiayi, MA Yong, LI Chang. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. DOI: 10.1016/j.inffus.2018.02.004

    [2] YE Kuntao, LI Wen, SHU Leilei, et al. Infrared and visible image fusion method based on improved saliency detection and non-subsampled Shearlet transform[J]. Infrared Technology, 2021, 43(12): 1212-1221. http://hwjs.nvir.cn/article/id/bfd9f932-e0bd-4669-b698-b02d42e31805

    [3] LI Shutao, KANG Xudong, FANG Leyuan, et al. Pixel-level image fusion: a survey of the state of the art[J]. Information Fusion, 2017, 33: 100-112. DOI: 10.1016/j.inffus.2016.05.004

    [4] MA Cong, MIAO Zhenjiang, ZHANG Xiaoping, et al. A saliency prior context model for real-time object tracking[J]. IEEE Transactions on Multimedia, 2017, 19(11): 2415-2424.

    [5] HU Wenrui, YANG Yehui, ZHANG Wensheng, et al. Moving object detection using tensor-based low-rank and saliently fused-sparse decomposition[J]. IEEE Transactions on Image Processing, 2017, 26(2): 724-737. DOI: 10.1109/TIP.2016.2627803

    [6] YANG Jiuzhang, LIU Weijian, CHENG Yang. Asymmetric infrared and visible image fusion based on contrast pyramid and bilateral filtering[J]. Infrared Technology, 2021, 43(9): 840-844. http://hwjs.nvir.cn/article/id/1c7de46d-f30d-48dc-8841-9e8bf3c91107

    [7] LUO Di, WANG Congqing, ZHOU Yongjun. A visible and infrared image fusion method based on generative adversarial networks and attention mechanism[J]. Infrared Technology, 2021, 43(6): 566-574. http://hwjs.nvir.cn/article/id/3403109e-d8d7-45ed-904f-eb4bc246275a

    [8] AZARANG A, MANOOCHEHRI H E, et al. Convolutional autoencoder-based multispectral image fusion[J]. IEEE Access, 2019, 7: 35673-35683. DOI: 10.1109/ACCESS.2019.2905511

    [9] HOU Ruichao, ZHOU Dongming, NIE Rencan, et al. VIF-Net: an unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-652.

    [10] LIU Yu, CHEN Xun, HU Peng, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207. DOI: 10.1016/j.inffus.2016.12.001

    [11] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. DOI: 10.1016/j.inffus.2018.09.004

    [12] TANG Lili, LIU Gang, XIAO Gang. Infrared and visible image fusion method based on dual-path cascade adversarial mechanism[J]. Acta Photonica Sinica, 2021, 50(9): 0910004. https://www.cnki.com.cn/Article/CJFDTOTAL-GZXB202109035.htm

    [13] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118. DOI: 10.1016/j.inffus.2019.07.011

    [14] HAO Yongping, CAO Zhaorui, BAI Fan, et al. Research on infrared visible image fusion and target recognition algorithm based on region of interest mask convolution neural network[J]. Acta Photonica Sinica, 2021, 50(2): 0210002. https://www.cnki.com.cn/Article/CJFDTOTAL-GZXB202102010.htm

    [15] LIU Jia, LI Dengfeng. Infrared and visible light image fusion based on Mahalanobis distance and guided filter weighting[J]. Infrared Technology, 2021, 43(2): 162-169. http://hwjs.nvir.cn/article/id/56484763-c7b0-4273-a087-8d672e8aba9a

    [16] LI Hui, WU Xiaojun, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]//Proceedings of the 24th International Conference on Pattern Recognition (ICPR), IEEE, 2018: 8546006-1.

    [17] LIU Yu, CHEN Xun, WARD R K, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882-1886. https://ieeexplore.ieee.org/document/7593316/

    [18] LIU C H, QI Y, DING W R. Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics & Technology, 2017, 83: 94-102. https://www.sciencedirect.com/science/article/pii/S1350449516307150

    [19] MA Jinlei, ZHOU Zhiqian, WANG Bo, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17. https://www.sciencedirect.com/science/article/pii/S1350449516305928

    [20] MA Jiayi, ZHOU Yi. Infrared and visible image fusion via gradientlet filter[J]. Computer Vision and Image Understanding, 2020, 197/198: 103016.

    [21] QU Xiaobo, YAN Jingwen, XIAO Hongzhi, et al. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain[J]. Acta Automatica Sinica, 2008, 34(12): 1508-1514. https://www.sciencedirect.com/science/article/pii/S1874102908601743

    [22] LIU Yu, CHEN Xun, CHENG Juan, et al. Infrared and visible image fusion with convolutional neural networks[J]. International Journal of Wavelets, Multiresolution and Information Processing, 2018, 16(3): 1850018. DOI: 10.1142/S0219691318500182

    [23] HAGHIGHAT M, RAZIAN M A. Fast-FMI: non-reference image fusion metric[C]//International Conference on Application of Information and Communication Technologies (AICT), 2014: 1-3.

Publication history
  • Received:  2022-01-14
  • Revised:  2022-02-27
  • Published:  2022-12-19
