YUAN Yubin, PENG Jing, SHEN Yu, et al. Fusion Algorithm for Near-Infrared and Color Visible Images Based on Tetrolet Transform[J]. Infrared Technology, 2020, 42(3): 223-230. [doi:10.11846/j.issn.1001_8891.202003004]

Fusion Algorithm for Near-Infrared and Color Visible Images Based on Tetrolet Transform

Infrared Technology [ISSN: 1001-8891 / CN: 53-1053/TN]

Volume:
42
Issue:
2020, No. 3
Pages:
223-230
Publication Date:
2020-03-23

Article Info

Title:
Fusion Algorithm for Near-Infrared and Color Visible Images Based on Tetrolet Transform

Article Number:
1001-8891(2020)05-0223-08
Author(s):
YUAN Yubin, PENG Jing, SHEN Yu, CHEN Xiaopeng
School of Electronic and Information Engineering, Lanzhou Jiaotong University
Keywords:
color image fusion; Tetrolet transform; expectation maximization algorithm; adaptive pulse coupled neural network
CLC Number:
TP391
DOI:
10.11846/j.issn.1001_8891.202003004
Document Code:
A
Abstract:
To address the reduced contrast, loss of detail, and color distortion that arise when near-infrared and color visible images are fused, a new fusion algorithm based on the Tetrolet transform and an adaptive pulse-coupled neural network (PCNN) is proposed. First, the color visible source image is converted to the hue-saturation-intensity (HSI) space, in which the components are relatively independent, and its intensity component and the near-infrared image are each decomposed by the Tetrolet transform. For the resulting low-frequency coefficients, a fusion rule based on the expectation-maximization algorithm, which finds the maximum-likelihood estimate of a latent distribution from a given incomplete dataset, is proposed. For the high-frequency coefficients, an adaptive PCNN model whose threshold is automatically adjusted by a Sobel operator is used as the fusion rule. The fused low- and high-frequency coefficients are reconstructed by the inverse Tetrolet transform to obtain the fused intensity image, and an adaptive stretching method for the saturation component is proposed to counteract the decline in saturation. The processed components are then mapped back to RGB space to complete the fusion. The proposed algorithm was compared with several efficient fusion algorithms; the experimental results show that the images obtained by this method have clear details and improved color contrast, with clear advantages in objective metrics such as saturation, color restoration performance, structural similarity, and contrast.
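As a rough illustration of the pipeline described in the abstract, the sketch below implements only the color-space bookkeeping around the fusion step: RGB-to-HSI conversion (standard sector-wise formulas), a per-pixel maximum standing in for the paper's Tetrolet/EM/PCNN fusion of the intensity and near-infrared images, a fixed linear saturation stretch standing in for the paper's adaptive stretch, and the inverse HSI-to-RGB mapping. All function names and the simplified fusion and stretch rules are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to H, S, I components."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    min_c = np.minimum(np.minimum(r, g), b)
    s = 1.0 - min_c / np.maximum(i, 1e-12)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)  # hue in [0, 1]
    return h, s, i

def hsi_to_rgb(h, s, i):
    """Inverse of rgb_to_hsi, using the usual three 120-degree hue sectors."""
    h = h * 2 * np.pi
    r, g, b = np.zeros_like(h), np.zeros_like(h), np.zeros_like(h)
    m0 = h < 2 * np.pi / 3                            # red-green sector
    m1 = (h >= 2 * np.pi / 3) & (h < 4 * np.pi / 3)   # green-blue sector
    m2 = h >= 4 * np.pi / 3                           # blue-red sector
    def sector(hh):
        # x: the minimum channel; y: the maximum; z: the remaining one
        x = i * (1 - s)
        y = i * (1 + s * np.cos(hh) / np.maximum(np.cos(np.pi / 3 - hh), 1e-12))
        z = 3 * i - (x + y)
        return x, y, z
    x0, y0, z0 = sector(h)
    b[m0], r[m0], g[m0] = x0[m0], y0[m0], z0[m0]
    x1, y1, z1 = sector(h - 2 * np.pi / 3)
    r[m1], g[m1], b[m1] = x1[m1], y1[m1], z1[m1]
    x2, y2, z2 = sector(h - 4 * np.pi / 3)
    g[m2], b[m2], r[m2] = x2[m2], y2[m2], z2[m2]
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def fuse_nir_visible(rgb, nir, stretch=1.2):
    """Toy stand-in for the paper's pipeline: fuse the intensity component
    with the NIR image (per-pixel maximum instead of the Tetrolet/EM/PCNN
    rules) and linearly stretch saturation (instead of adaptive stretching)."""
    h, s, i = rgb_to_hsi(rgb)
    i_fused = np.maximum(i, nir)                   # placeholder fusion rule
    s_stretched = np.clip(s * stretch, 0.0, 1.0)   # placeholder saturation stretch
    return hsi_to_rgb(h, s_stretched, i_fused)
```

The per-pixel maximum and the fixed stretch factor are deliberate simplifications; in the paper, the corresponding rules operate on Tetrolet low- and high-frequency coefficients and adapt the stretch to the image.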



Memo:
Received: 2019-10-11; Revised: 2020-03-06.
Biography: YUAN Yubin (1995-), male, from Heze, Shandong Province, M.S. candidate; research interests: image processing and computer vision. E-mail: 164821193@qq.com.
Funding: National Natural Science Foundation of China (61861025, 61562057, 61663021, 61761027, 51669010); Program for Changjiang Scholars and Innovative Research Teams in Universities (IRT_16R36); Open Project of the Key Laboratory of Opto-Electronic Technology and Intelligent Control (Lanzhou Jiaotong University), Ministry of Education (KFKT2018-9); Lanzhou Talent Innovation and Entrepreneurship Project (2018-RC-117).

Last Update: 2020-03-17