[1] JIANG Xiaolin, WANG Zhishe. Visible and Infrared Image Fusion Based on Structured Group and Double Sparsity[J]. Infrared Technology, 2020, 42(3): 272-278. [doi:10.11846/j.issn.1001_8891.202003010]

Visible and Infrared Image Fusion Based on Structured Group and Double Sparsity

Infrared Technology [ISSN: 1001-8891 / CN: 53-1053/TN]

Volume:
42
Issue:
2020, No. 3
Pages:
272-278
Publication date:
2020-03-23

Article Info

Title:
Visible and Infrared Image Fusion Based on Structured Group and Double Sparsity

Article number:
1001-8891(2020)05-0272-07
Author(s):
JIANG Xiaolin, WANG Zhishe
School of Applied Science, Taiyuan University of Science and Technology
Keywords:
image fusion; non-local self-similarity; structured group; double sparsity model
CLC number:
TP391.41
DOI:
10.11846/j.issn.1001_8891.202003010
Document code:
A
Abstract:
Traditional sparse-representation fusion methods for visible and infrared images construct analytic or learned dictionaries from image patches and use the dictionary atoms to represent the salient features of the image. These methods have two shortcomings: they ignore the relationships among image patches, and their dictionaries adapt poorly while being costly to train. To address both problems, this paper proposes a visible and infrared image fusion method based on structured groups and double sparsity. The method first exploits the non-local self-similarity of the image to assemble patches into similarity structure groups, then trains a dictionary on these groups using a double-sparsity decomposition model, which effectively combines the advantages of analytic and learned dictionaries. This reduces the complexity of dictionary training and yields a structured dictionary that is more flexible and adaptable. Comparative experiments show that the method effectively improves the visual quality of fused infrared and visible images and outperforms traditional sparse-representation fusion methods in both subjective and objective evaluation.
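The pipeline the abstract describes — grouping patches by non-local self-similarity, coding them over a double-sparsity dictionary (a sparse matrix applied to a fixed analytic base such as the DCT), and fusing by keeping the larger-magnitude coefficients — can be illustrated in outline. The sketch below is not the authors' implementation: all function names, the patch/group sizes, the choose-max rule, and the identity placeholder standing in for the learned sparse matrix are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 1-D DCT-II basis (rows are basis vectors)."""
    M = np.array([[np.cos(np.pi * (2 * j + 1) * i / (2 * n)) for j in range(n)]
                  for i in range(n)]) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

def extract_patches(img, size=8, stride=4):
    """Collect vectorized size-by-size patches on a regular grid."""
    H, W = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(0, H - size + 1, stride)
               for j in range(0, W - size + 1, stride)]
    return np.array(patches)

def similarity_group(patches, ref_idx, group_size=8):
    """Non-local self-similarity: indices of the patches closest (L2) to a reference."""
    d = np.linalg.norm(patches - patches[ref_idx], axis=1)
    return np.argsort(d)[:group_size]

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse code of x over dictionary D."""
    r, support = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

def fuse_patches(D, vis, ir, k=4):
    """Code each source patch pair over D; keep the larger-magnitude coefficients."""
    fused = []
    for v, w in zip(vis, ir):
        cv, cw = omp(D, v, k), omp(D, w, k)
        c = np.where(np.abs(cv) >= np.abs(cw), cv, cw)  # choose-max fusion rule
        fused.append(D @ c)
    return np.array(fused)

# Double-sparsity dictionary D = Phi @ A: a fixed analytic base Phi (2-D DCT
# atoms) times a sparse matrix A. Here A is the identity as a stand-in for the
# trained sparse matrix, so D reduces to the analytic base.
size = 8
M = dct_matrix(size)
Phi = np.kron(M, M).T          # 64 x 64; columns are 2-D DCT atoms
A = np.eye(Phi.shape[1])       # placeholder for the learned sparse matrix
D = Phi @ A
```

In the actual method, A would be learned per similarity structure group (e.g., by a sparse K-SVD update), which is what keeps training cheap while leaving the base Phi fixed.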

References:

[1] MA J Y, MA Y, LI C. Infrared and visible image fusion methods and applications: A survey[J]. Information Fusion, 2019(45): 153-178.
[2] WANG Z S, YANG F B, PENG Z H, et al. Multi-sensor image enhanced fusion algorithm based on NSST and top-hat transformation[J]. Optik - International Journal for Light and Electron Optics, 2015, 126(23): 4184-4190.
[3] Aishwarya N, Thangammal C B. An image fusion framework using novel dictionary based sparse representation[J]. Multimedia Tools and Applications, 2017, 76(11): 21869-21888.
[4] CHANG L H, FENG X C, ZHANG R, et al. Image decomposition fusion method based on sparse representation and neural network[J]. Applied Optics, 2017, 56(28): 7969-7977.
[5] Kim M, Han D K, Ko H. Joint patch clustering-based dictionary learning for multimodal image fusion[J]. Information Fusion, 2016(27): 198-214.
[6] ZHU Z Q, YIN H P, CHAI Y, et al. A Novel Multi-modality Image Fusion Method Based on Image Decomposition and Sparse Representation[J]. Information Sciences, 2018(432): 516-529.
[7] WANG R, DU L F. Infrared and visible image fusion based on random projection and sparse representation[J]. International Journal of Remote Sensing, 2014, 35(5): 1640-1652.
[8] LIU C H, QI Y, DING W R. Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics and Technology, 2017(83): 94-102.
[9] YIN M, DUAN P H, LIU W, et al. A novel infrared and visible image fusion algorithm based on shift-invariant dual-tree complex shearlet transform and sparse representation[J]. Neurocomputing, 2016, 226(22): 182-191.
[10] ZHANG J, ZHAO D B, WEN G. Group-based sparse representation for image restoration[J]. IEEE Transactions on Image Processing, 2014, 23(8): 3336-3351.
[11] WU Y, FANG L Y, LI S T. Weighted Tensor Rank-1 Decomposition for Nonlocal Image Denoising[J]. IEEE Transactions on Image Processing, 2019, 28(6): 2719-2730.
[12] Eslahi N, Aghagolzadeh A. Compressive Sensing Image Restoration Using Adaptive Curvelet Thresholding and Nonlocal Sparse Regularization[J]. IEEE Transactions on Image Processing, 2016, 25(7): 3126-3140.
[13] CHEN H, HE X, TENG Q, et al. Single image super resolution using local smoothness and nonlocal self-similarity priors[J]. Signal Processing: Image Communication, 2016(43): 68-81.
[14] ZHANG Xiao, XUE Yueju, TU Shuqin, et al. Remote sensing image fusion based on structural group sparse representation[J]. Journal of Image and Graphics, 2016, 21(8): 1106-1118.
[15] LI Y, LI F, BAI B, et al. Image fusion via nonlocal sparse K-SVD dictionary learning[J]. Applied Optics, 2016, 55(7): 1814-1823.
[16] BIN Y, CHAO Y, GUO Y H. Efficient image fusion with approximate sparse representation[J]. International Journal of Wavelets Multiresolution and Information Processing, 2016, 14(4): 1650024-1650039.
[17] WANG K P, QI G Q, ZHU Z Q, et al. A Novel Geometric Dictionary Construction Approach for Sparse Representation Based Image Fusion[J]. Entropy, 2017, 19(7): 306-323.
[18] Rubinstein R, Zibulevsky M, Elad M. Double Sparsity: Learning Sparse Dictionaries for Sparse Signal Approximation[J]. IEEE Transactions on Signal Processing, 2010, 58(3): 1553-1564.
[19] Sulam J, Ophir B, Zibulevsky M, et al. Trainlets: Dictionary Learning in High Dimensions[J]. IEEE Transactions on Signal Processing, 2016, 64(12): 3180-3193.
[20] ZHANG Q, LIU Y, Blum R S, et al. Sparse Representation based Multi-sensor Image Fusion for Multi-focus and Multi-modality Images: A Review[J]. Information Fusion, 2017(40): 57-75.

Similar Articles:

[1] ZHOU Xiao, YANG Feng-bao, LIN Su-zhen, et al. The Study on Fusion Method of Dual-color MWIR Images Based on Adaptive Sliding Window[J]. Infrared Technology, 2013, 35(04): 227.
[2] AN Fu, YANG Feng-bao, LIN Su-zhen, et al. Infrared Polarization Images Fusion Based on Local Energy and Fuzzy Logic[J]. Infrared Technology, 2012, 34(10): 573.
[3] YANG Feng, ZHANG Jun-ju, XU Hui, et al. Hardware Implementation of an Image Fusion Method[J]. Infrared Technology, 2013, 35(09): 541. [doi:10.11846/j.issn.1001_8891.201309003]
[4] HAN Bo, ZHANG Peng-hui, XU Hui, et al. Region-based image fusion algorithm using bidimensional empirical mode decomposition[J]. Infrared Technology, 2013, 35(09): 546. [doi:10.11846/j.issn.1001_8891.201309004]
[5] HE Yong-qiang, WANG Qun, WANG Guo-pei, et al. Gray Image Colorization Based on Fusion and Color Transfer[J]. Infrared Technology, 2012, 34(05): 276.
[6] LI Wei-wei, YANG Feng-bao, LIN Su-zhen, et al. Study on Pseudo-color Fusion of Infrared Polarization and Intensity Image[J]. Infrared Technology, 2012, 34(02): 109.
[7] XU Zhong-zhong, QU Shi-ru. A New Comprehensive Evaluation of Visible and Infrared Image Fusion[J]. Infrared Technology, 2011, 33(10): 568.
[8] XUE Mo-gen, LIU Cun-chao, XU Guo-ming, et al. Infrared and Low Light Level Image Fusion Based on Multi-scale Dictionary[J]. Infrared Technology, 2013, 35(11): 696. [doi:10.11846/j.issn.1001_8891.201311005]
[9] SUN Ai-ping, GONG Yang-yun, ZHU You-pan, et al. Optical System Design of Low-light-level and Infrared Image Fusion Hand-held Viewer[J]. Infrared Technology, 2013, 35(11): 712. [doi:10.11846/j.issn.1001_8891.201311008]
[10] HE Yong-qiang, ZHOU Yun-chuan, TONG Hong-yu, et al. A Target Detection Method Based on Color Transfer in YCBCR Space for Fused Image[J]. Infrared Technology, 2011, 33(06): 349.

Memo:
Received: 2019-07-25; Revised: 2019-12-24.
Biography: JIANG Xiaolin (1994-), female, master's student; research interest: image fusion. E-mail: haoxiaolin2@126.com.
Corresponding author: WANG Zhishe (1982-), male, associate professor, Ph.D.; research interests: infrared image processing, multimodal image registration, and image fusion. E-mail: wangzs@tyust.edu.cn.
Funding: Scientific and Technological Innovation Project of Higher Education Institutions in Shanxi Province (2017162); Doctoral Start-up Fund of Taiyuan University of Science and Technology (20162004); Shanxi Province "1331 Project" Key Innovation Team Building Plan (2019 3-3); Shanxi Province Natural Science Foundation (201901D111260).

Last Update: 2020-03-17