[1]董安勇,杜庆治,苏斌,等.基于卷积神经网络的红外与可见光图像融合[J].红外技术,2020,42(7):660-669.[doi:10.11846/j.issn.1001_8891.202007009]
 DONG Anyong,DU Qingzhi,SU Bin,et al.Infrared and Visible Image Fusion Based on Convolutional Neural Network[J].Infrared Technology,2020,42(7):660-669.[doi:10.11846/j.issn.1001_8891.202007009]

基于卷积神经网络的红外与可见光图像融合

《红外技术》[ISSN:1001-8891/CN:CN 53-1053/TN]

Volume:
Vol. 42
Issue:
2020, No. 7
Pages:
660-669
Publication date:
2020-07-23

文章信息/Info

Title:
Infrared and Visible Image Fusion Based on Convolutional Neural Network
作者:
董安勇1, 杜庆治1, 苏斌2, 赵文博2, 于闻2
1. 昆明理工大学 信息工程与自动化学院;
2. 昆明北方红外技术股份有限公司
Author(s):
DONG Anyong1, DU Qingzhi1, SU Bin2, ZHAO Wenbo2, YU Wen2
1. Kunming University of Science and Technology, Faculty of Information Engineering and Automation;
2. Kunming North Infrared Technology Co., Ltd.

关键词:
图像融合; 卷积神经网络; 参数自适应脉冲耦合神经网络; NSST变换
Keywords:
image fusion; convolutional neural network; parameter-adaptive pulse-coupled neural network; NSST transform
分类号:
TP751.1
DOI:
10.11846/j.issn.1001_8891.202007009
文献标志码:
A
摘要:
非下采样剪切波变换(NSST)域中低频子带的融合需要人工给定融合模式,因此未能充分捕获源图像的空间连续性和轮廓细节信息。针对上述问题,提出了基于深度卷积神经网络的红外与可见光图像融合算法。首先,使用孪生双通道卷积神经网络学习NSST域低频子带的特征来输出衡量子带空间细节信息的特征图。然后,根据高斯滤波处理的特征图设计了基于局部相似性的测量函数来自适应地调整NSST域低频子带的融合模式。最后,根据NSST域高频子带的方差、局部区域能量以及可见度特征来自适应地设置脉冲耦合神经网络参数完成NSST域高频子带的融合。实验结果表明:该算法QAB/F指标略弱于对比算法,但SF、SP、SSIM以及VIFF指标分别提高了约50.42%、14.25%、7.91%以及61.67%,有效地解决了低频子带融合模式给定的问题,同时又克服了手动设置PCNN参数的缺陷。
Abstract:
The fusion of the low-frequency subband in the non-subsampled shearlet transform (NSST) domain conventionally requires a manually specified fusion rule; thus, the spatial continuity and contour detail information of the source images are not adequately captured. An infrared and visible image fusion algorithm based on a deep convolutional neural network is proposed to solve this problem. First, a Siamese two-channel convolutional neural network is used to learn the characteristics of the low-frequency subband in the NSST domain and to output a feature map that measures the spatial detail information of the subband. Then, on the basis of the feature map smoothed by Gaussian filtering, a local-similarity-based measurement function is designed to adaptively adjust the fusion rule for the low-frequency subband in the NSST domain. Finally, on the basis of the variance, local region energy, and visibility of the high-frequency subbands in the NSST domain, the pulse-coupled neural network (PCNN) parameters are adaptively set to complete the fusion of the high-frequency subbands. Experimental results show that the QAB/F index of the algorithm is slightly lower than that of the comparison algorithms, but the SF, SP, SSIM, and VIFF indices are improved by approximately 50.42%, 14.25%, 7.91%, and 61.67%, respectively. The method thus resolves the problem of manually specifying the low-frequency subband fusion rule while also eliminating the need to set the PCNN parameters by hand.
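The three-stage pipeline the abstract describes (multiscale decomposition, feature-map-guided low-frequency fusion, adaptive high-frequency selection) can be sketched in strongly simplified form. The sketch below is illustrative only: a Gaussian low/high split stands in for the NSST, a local detail-energy map stands in for the trained Siamese network's feature map, and absolute-max selection stands in for the adaptive PCNN. All function names and parameters are assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def blur(img, k):
    """'Same'-size 2-D convolution with reflective padding (no SciPy needed)."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def fuse(ir, vis):
    """Toy two-band fusion in the spirit of the paper's pipeline."""
    k = gaussian_kernel()
    low_a, low_b = blur(ir, k), blur(vis, k)    # stand-in for NSST low bands
    high_a, high_b = ir - low_a, vis - low_b    # stand-in for NSST high bands
    # Low-frequency rule: weight by local detail energy
    # (stand-in for the CNN feature map + local-similarity measure).
    ea, eb = blur(high_a ** 2, k), blur(high_b ** 2, k)
    w = ea / (ea + eb + 1e-12)
    low = w * low_a + (1 - w) * low_b
    # High-frequency rule: absolute-max selection (stand-in for the adaptive PCNN).
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high
```

A convenient sanity check: fusing an image with itself reproduces the image, because both rules then reduce to the identity.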

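The high-frequency rule relies on a pulse-coupled neural network whose parameters the paper derives adaptively from subband variance, local region energy, and visibility. A generic simplified PCNN iteration, with fixed illustrative constants rather than the paper's adaptive settings, can be sketched as follows; larger-stimulus neurons fire earlier and more often, which is what makes accumulated firing counts usable as a fusion activity measure.

```python
import numpy as np

def spcnn_firing_counts(S, iters=40, beta=0.5, v_l=1.0, v_e=20.0,
                        alpha_f=0.1, alpha_e=0.3):
    """Simplified PCNN: return, per pixel, how many of `iters`
    iterations the neuron fired. S is the stimulus, scaled to [0, 1]."""
    h, w = S.shape
    Y = np.zeros((h, w))       # firing map of the previous step
    U = np.zeros((h, w))       # internal activity
    E = np.full((h, w), v_e)   # dynamic threshold, starts high
    T = np.zeros((h, w))       # accumulated firing counts
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])  # 8-neighbour linking weights
    for _ in range(iters):
        Yp = np.pad(Y, 1)      # zero-padded previous firings
        # Linking input: weighted sum of the neighbours' last firings.
        L = sum(W[i, j] * Yp[i:i + h, j:j + w]
                for i in range(3) for j in range(3))
        U = np.exp(-alpha_f) * U + S * (1.0 + beta * v_l * L)
        Y = (U > E).astype(float)            # fire where activity beats threshold
        E = np.exp(-alpha_e) * E + v_e * Y   # firing recharges the threshold
        T += Y
    return T
```

In a fusion rule, each high-frequency coefficient map would be fed in as the stimulus, and the coefficient whose neuron accumulates more firings would be selected.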
参考文献/References:

[1] 杨勇, 万伟国, 黄淑英, 等. 稀疏表示和非下采样Shearlet变换相结合的多聚焦图像融合[J]. 小型微型计算机系统, 2017, 38(2): 386-392.
YANG Yong, WAN Weiguo, HUANG Shuying, et al. Multi-focus image fusion based on sparse representation and nonsubsampled shearlet transform[J]. Journal of Chinese Computer Systems, 2017, 38(2): 386-392.
[2] 李娇, 杨艳春, 党建武, 等. NSST与引导滤波相结合的多聚焦图像融合算法[J]. 哈尔滨工业大学学报, 2018, 50(11): 145-152.
LI Jiao, YANG Yanchun, DANG Jianwu, et al. Multi-focus image fusion algorithm based on NSST and guided filtering[J]. Journal of Harbin Institute of Technology, 2018, 50(11): 145-152.
[3] LIU Y, CHEN X, CHENG J, et al. Infrared and visible image fusion with convolutional neural networks[J]. International Journal of Wavelets, Multiresolution and Information Processing, 2018, 16(3): 1850018.
[4] DU Chaoben, GAO Shesheng. Image segmentation-based multi-focus image fusion through multi-scale convolutional neural network[J]. IEEE Access, 2017, 5: 15750-15761.
[5] 朱芳, 刘卫. 基于自适应PCNN模型的四元数小波域图像融合算法[J]. 红外技术, 2018, 40(7): 660-667.
ZHU Fang, LIU Wei. Quaternion wavelet domain image fusion algorithm based on adaptive PCNN model[J]. Infrared Technology, 2018, 40(7): 660-667.
[6] CHEN Y, PARK S K, MA Y, et al. A new automatic parameter setting method of a simplified PCNN for image segmentation[J]. IEEE Transactions on Neural Networks, 2011, 22(6): 880-892.
[7] KONG W W. Multi-sensor image fusion based on NSST domain I2CM[J]. Electronics Letters, 2013, 49(13): 802-803.
[8] YANG Y, YANG M, HUANG S, et al. Robust sparse representation combined with adaptive PCNN for multifocus image fusion[J]. IEEE Access, 2018, 6: 20138-20151.
[9] OUERGHI H, MOURALI O, EZZEDDINE Z. Non-subsampled shearlet transform based MRI and PET brain image fusion using simplified pulse coupled neural network and weight local features in YIQ color space[J]. IET Image Processing, 2018, 12(10): 1873-1880.
[10] CHEN Y, PARK S, MA Y, et al. A new automatic parameter setting method of a simplified PCNN for image segmentation[J]. IEEE Transactions on Neural Networks, 2011, 22(6): 880-892.
[11] YU Y, GONG Z, WANG C, et al. An unsupervised convolutional feature fusion network for deep representation of remote sensing images[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(1): 23-27.
[12] ZAGORUYKO S, KOMODAKIS N. Learning to compare image patches via convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 4353-4361.
[13] 戴进墩, 刘亚东, 毛先胤, 等. 基于FDST和双通道PCNN的红外与可见光图像融合[J]. 红外与激光工程, 2019, 48(2): 67-74.
DAI Jindun, LIU Yadong, MAO Xianyin, et al. Infrared and visible image fusion based on FDST and dual-channel PCNN[J]. Infrared and Laser Engineering, 2019, 48(2): 67-74.
[14] 郝文超, 贾年. NSCT域内基于自适应PCNN的红外与可见光图像融合方法[J]. 西华大学学报: 自然科学版, 2014, 33(3): 11-14.
HAO Wenchao, JIA Nian. Infrared and visible image fusion method based on adaptive PCNN in NSCT domain[J]. Journal of Xihua University: Natural Science Edition, 2014, 33(3): 11-14.
[15] 张生伟, 李伟, 赵雪景. 一种基于稀疏表示的可见光与红外图像融合方法[J]. 电光与控制, 2017, 24(6): 47-52.
ZHANG Shengwei, LI Wei, ZHAO Xuejing. A visible and infrared image fusion method based on sparse representation[J]. Electronics Optics & Control, 2017, 24(6): 47-52.
[16] GAO Guorong, XU Luping, FENG Dongzhu. Multi-focus image fusion based on non-subsampled shearlet transform[J]. IET Image Processing, 2013, 7(6): 633-639.
[17] YIN Ming, LIU Xiaoning, LIU Yu, et al. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain[J]. IEEE Transactions on Instrumentation and Measurement, 2019, 68(1): 49-64.
[18] 梅礼晔, 郭晓鹏, 张俊华, 等. 基于空间金字塔池化的深度卷积神经网络多聚焦图像融合[J]. 云南大学学报: 自然科学版, 2019, 41(1): 18-27.
MEI Liye, GUO Xiaopeng, ZHANG Junhua, et al. Multi-focus image fusion using deep convolutional neural network based on spatial pyramid pooling[J]. Journal of Yunnan University: Natural Science Edition, 2019, 41(1): 18-27.
[19] SHEIKH H R, BOVIK A C. Image information and visual quality[J]. IEEE Transactions on Image Processing, 2006, 15(2): 430-444.

相似文献/References:

[1]周萧,杨风暴,蔺素珍,等. 基于自适应滑动窗口的双色中波红外图像融合方法研究[J].红外技术,2013,35(04):227.
 ZHOU Xiao,YANG Feng-bao,LIN Su-zhen,et al.The Study on Fusion Method of Dual-color MWIR Images Based on Adaptive Sliding Window[J].Infrared Technology,2013,35(4):227.
[2]安富,杨风暴,蔺素珍,等. 基于局部能量与模糊逻辑的红外偏振图像融合[J].红外技术,2012,34(10):573.
 AN Fu,YANG Feng-bao,LIN Su-zhen,et al.Infrared Polarization Images Fusion Based on Local Energy and Fuzzy Logic[J].Infrared Technology,2012,34(10):573.
[3]杨 锋,张俊举,许 辉,等.一种图像融合算法硬件实现[J].红外技术,2013,35(09):541.[doi:10.11846/j.issn.1001_8891.201309003]
 YANG Feng,ZHANG Jun-ju,XU Hui,et al.Hardware Implementation of an Image Fusion Method[J].Infrared Technology,2013,35(9):541.[doi:10.11846/j.issn.1001_8891.201309003]
[4]韩 博,张鹏辉,许 辉,等.基于区域的二维经验模式分解的图像融合算法[J].红外技术,2013,35(09):546.[doi:10.11846/j.issn.1001_8891.201309004]
 HAN Bo,ZHANG Peng-hui,XU Hui,et al.Region-based image fusion algorithm using bidimensional empirical mode decomposition[J].Infrared Technology,2013,35(9):546.[doi:10.11846/j.issn.1001_8891.201309004]
[5]何永强,王群,王国培,等.基于融合和色彩传递的灰度图像彩色化技术[J].红外技术,2012,34(05):276.
 HE Yong-qiang,WANG Qun,WANG Guo-pei,et al.Gray Image Colorization Based on Fusion and Color Transfer[J].Infrared Technology,2012,34(5):276.
[6]李伟伟,杨风暴,蔺素珍,等.红外偏振与红外光强图像的伪彩色融合研究[J].红外技术,2012,34(02):109.
 LI Wei-wei,YANG Feng-bao,LIN Su-zhen,et al.Study on Pseudo-color Fusion of Infrared Polarization and Intensity Image[J].Infrared Technology,2012,34(2):109.
[7]徐中中,曲仕茹.新型可见光和红外图像融合综合评价方法[J].红外技术,2011,33(10):568.
 XU Zhong-zhong,QU Shi-ru.A New Comprehensive Evaluation of Visible and Infrared Image Fusion[J].Infrared Technology,2011,33(10):568.
[8]薛模根,刘存超,徐国明,等.基于多尺度字典的红外与微光图像融合[J].红外技术,2013,35(11):696.[doi:10.11846/j.issn.1001_8891.201311005]
 XUE Mo-gen,LIU Cun-chao,XU Guo-ming,et al.Infrared and Low Light Level Image Fusion Based on Multi-scale Dictionary[J].Infrared Technology,2013,35(11):696.[doi:10.11846/j.issn.1001_8891.201311005]
[9]孙爱平,龚杨云,朱尤攀,等.微光与红外图像融合手持观察镜光学系统设计[J].红外技术,2013,35(11):712.[doi:10.11846/j.issn.1001_8891.201311008]
 SUN Ai-ping,GONG Yang-yun,ZHU You-pan,et al.Optical System Design of Low-light-level and Infrared Image Fusion Hand-held Viewer[J].Infrared Technology,2013,35(11):712.[doi:10.11846/j.issn.1001_8891.201311008]
[10]何永强,周云川,仝红玉,等.基于YCBCR空间颜色传递的融合图像目标检测算法[J].红外技术,2011,33(06):349.
 HE Yong-qiang,ZHOU Yun-chuan,TONG Hong-yu,et al.A Target Detection Method Based on Color Transfer in YCBCR Space for Fused Image[J].Infrared Technology,2011,33(6):349.

备注/Memo

Received: 2019-08-13; revised: 2020-07-02.
Biography: DONG Anyong (1992-), male, master's student; research interests: artificial intelligence, machine learning, and image processing.
Corresponding author: DU Qingzhi (1977-), male, senior experimentalist and master's supervisor; research interests: communication and information systems. E-mail: 57960748@qq.com.
Foundation item: Kunming Science and Technology Bureau program for the promotion and application of scientific and technological achievements and for technology benefiting the people (昆科计字2016-2-G-05372).

更新日期/Last Update: 2020-07-16