
Infrared and Visible Image Fusion Algorithm Based on Regional Similarity

REN Quanhui, SUN Yijie, HUANG Cansheng

Citation: REN Quanhui, SUN Yijie, HUANG Cansheng. Infrared and Visible Image Fusion Algorithm Based on Regional Similarity[J]. Infrared Technology, 2022, 44(5): 492-496.

Funding:

Key R&D and Promotion Special Project of Henan Province (Science and Technology Research), 2022: 222102320125

Key Scientific Research Project of Higher Education Institutions of Henan Province, 2022: 22B510021

Detailed information
    About the authors:

    REN Quanhui (1978-), male, Han nationality, born in Zhengzhou, Henan; associate professor. Main research interests: electronic system design and EDA technology. E-mail: hzkd_2006@163.com

    Corresponding author:

    HUANG Cansheng (1970-), male, Han nationality, born in Chongzuo, Guangxi; senior experimentalist. Research interest: artificial intelligence. E-mail: 645948468@qq.com

  • CLC number: TN223


  • Abstract: To address the local blurring and incomplete background information produced by traditional infrared and visible image fusion algorithms, this paper proposes a new fusion algorithm. An edge detection operator is used to extract image contours, and an energy-based weighted fusion is applied; the regional similarity method is then used to extract the signal domain, and the images are finally fused according to signal strength. To verify the algorithm, comparison tests were conducted, and a quantitative analysis was performed using three metrics: standard deviation, information entropy, and average gradient. Compared with the traditional weighted average algorithm, the proposed method increases the standard deviation by up to 106.3%. The test results show that the proposed fusion method achieves better fusion quality and has practical value.
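    The fusion pipeline described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a Sobel operator for contour extraction, window-wise energy for the fusion weights, and a local correlation coefficient as the inter-region similarity measure; the paper's exact operators, window sizes, and threshold are not reproduced here.

    import numpy as np
    from scipy import ndimage

    def sobel_edges(img):
        # Contour extraction with a Sobel edge-detection operator (assumed choice).
        gx = ndimage.sobel(img.astype(float), axis=0, mode="reflect")
        gy = ndimage.sobel(img.astype(float), axis=1, mode="reflect")
        return np.hypot(gx, gy)

    def local_energy(img, size=7):
        # Window-wise energy: local mean of squared intensities.
        return ndimage.uniform_filter(img.astype(float) ** 2, size=size)

    def regional_similarity(ir, vis, size=7):
        # Local correlation coefficient between the two sources (assumed similarity measure).
        ir, vis = ir.astype(float), vis.astype(float)
        m_ir = ndimage.uniform_filter(ir, size)
        m_vis = ndimage.uniform_filter(vis, size)
        cov = ndimage.uniform_filter(ir * vis, size) - m_ir * m_vis
        var_ir = ndimage.uniform_filter(ir ** 2, size) - m_ir ** 2
        var_vis = ndimage.uniform_filter(vis ** 2, size) - m_vis ** 2
        return cov / np.sqrt(np.maximum(var_ir * var_vis, 1e-12))

    def fuse(ir, vis, size=7, sim_threshold=0.6, edge_weight=0.5):
        # Saliency of each source: window energy plus a contribution from the
        # Sobel contour map (the combination is an assumption).  Where the two
        # regions are similar, the sources are blended with energy-based weights;
        # where they are dissimilar, the more salient source is kept
        # (signal-strength rule).
        ir, vis = ir.astype(float), vis.astype(float)
        s_ir = local_energy(ir, size) + edge_weight * ndimage.uniform_filter(sobel_edges(ir) ** 2, size)
        s_vis = local_energy(vis, size) + edge_weight * ndimage.uniform_filter(sobel_edges(vis) ** 2, size)
        w_ir = s_ir / (s_ir + s_vis + 1e-12)
        weighted = w_ir * ir + (1.0 - w_ir) * vis
        strongest = np.where(s_ir >= s_vis, ir, vis)
        sim = regional_similarity(ir, vis, size)
        fused = np.where(sim >= sim_threshold, weighted, strongest)
        return np.clip(fused, 0, 255).astype(np.uint8)

    Usage with two registered, equally sized 8-bit grayscale arrays: fused = fuse(ir_img, vis_img).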
  • Figure 1.  The first group of images to be fused

    Figure 2.  The second group of images to be fused

    Figure 3.  Comparison test results of the first group of images

    Figure 4.  Comparison test results of the second group of images

    Table 1.  The first group of image fusion data

    Parameters             Wavelet fusion algorithm   Weighted average algorithm   IHS transform algorithm   Algorithm in this paper
    Standard deviation     33.4522                    24.9562                      25.8361                   40.2675
    Information entropy    6.7911                     6.3652                       6.4102                    7.5621
    Average gradient       5.3961                     5.3241                       5.4103                    6.1258

    Table 2.  The second group of image fusion data

    Parameters             Wavelet fusion algorithm   Weighted average algorithm   IHS transform algorithm   Algorithm in this paper
    Standard deviation     48.2561                    29.6521                      30.2478                   61.1598
    Information entropy    7.2014                     6.4567                       6.5125                    7.6512
    Average gradient       7.4851                     6.9654                       6.8512                    7.8623
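    For reference, a minimal sketch of how the three evaluation metrics in Tables 1 and 2 are commonly computed; the 256-bin grayscale histogram for the entropy and the forward-difference form of the average gradient are assumptions about the exact definitions used.

    import numpy as np

    def standard_deviation(img):
        # Spread of gray levels around the mean; larger values indicate higher contrast.
        return float(np.std(img.astype(float)))

    def information_entropy(img, bins=256):
        # Shannon entropy of the grayscale histogram, in bits.
        hist, _ = np.histogram(img, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def average_gradient(img):
        # Mean magnitude of horizontal/vertical intensity differences;
        # larger values indicate a sharper, more detailed image.
        img = img.astype(float)
        gx = np.diff(img, axis=1)[:-1, :]
        gy = np.diff(img, axis=0)[:, :-1]
        return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

    The 106.3% figure quoted in the abstract is the relative gain in standard deviation over the weighted average algorithm in Table 2: (61.1598 - 29.6521) / 29.6521 ≈ 1.063.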
Publication history
  • Received: 2021-08-29
  • Revised: 2021-11-23
  • Published: 2022-05-20
