XING Yanchao, NIU Zhenhua. Infrared and Visible Image Fusion Based on Global Energy Features and Improved PCNN[J]. Infrared Technology, 2024, 46(8): 902-911.

Infrared and Visible Image Fusion Based on Global Energy Features and Improved PCNN

More Information
  • Received Date: August 29, 2022
  • Revised Date: September 20, 2022
  • Abstract: To address the low clarity, low contrast, and insufficient texture detail of fused infrared and visible images, an image fusion algorithm based on a parameter-adaptive pulse-coupled neural network (PA-PCNN) was proposed. First, the source infrared image was dehazed using the dark channel prior to improve its clarity. The source images were then decomposed by the non-subsampled shearlet transform (NSST); the low-frequency coefficients were fused with the proposed global energy feature extraction algorithm combined with a modified spatial-frequency adaptive weight, while texture energy served as the external input of the PA-PCNN to fuse the high-frequency coefficients, and the fused grayscale image was obtained by the inverse NSST. To further improve human visual perception, a multiresolution color transfer algorithm was used to convert the grayscale result to a color image. The proposed method was compared with seven classical algorithms on two image pairs. The experimental results show that it significantly outperforms the comparison algorithms on the evaluation metrics and improves the clarity and detail of the fused image, verifying its effectiveness; converting the fused grayscale images into pseudo-color images further enhances target recognition and visual perception.
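
A minimal sketch of the high-frequency fusion step described in the abstract, written in Python. It assumes a simplified PCNN with fixed placeholder parameters and uses the absolute coefficient value as the external stimulus; the paper instead adapts the parameters from image statistics (PA-PCNN) and feeds in a texture-energy measure, and the NSST decomposition itself is omitted. All function and constant names here are illustrative only, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import convolve

    def pcnn_firing_map(coeff, iterations=110, beta=0.4,
                        alpha_f=0.1, alpha_e=1.0, v_l=1.0, v_e=20.0):
        """Run a simplified PCNN on one high-frequency sub-band and return
        the accumulated firing map. Parameters are fixed placeholders; the
        paper derives them adaptively from image statistics."""
        s = np.abs(coeff).astype(np.float64)
        s = s / (s.max() + 1e-12)                      # normalize stimulus to [0, 1]
        w = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])                # 3x3 linking weights
        u = np.zeros_like(s)                           # internal activity
        e = np.ones_like(s)                            # dynamic threshold
        y = np.zeros_like(s)                           # firing state
        fire_count = np.zeros_like(s)
        for _ in range(iterations):
            link = v_l * convolve(y, w, mode='nearest')    # input from firing neighbors
            u = np.exp(-alpha_f) * u + s * (1.0 + beta * link)
            y = (u > e).astype(np.float64)                 # fire when activity exceeds threshold
            e = np.exp(-alpha_e) * e + v_e * y             # raise threshold where neurons fired
            fire_count += y
        return fire_count

    def fuse_highpass(coeff_ir, coeff_vis):
        """Keep, pixel by pixel, the coefficient whose neuron fires more often."""
        fire_ir = pcnn_firing_map(coeff_ir)
        fire_vis = pcnn_firing_map(coeff_vis)
        return np.where(fire_ir >= fire_vis, coeff_ir, coeff_vis)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        band_ir = rng.standard_normal((64, 64))        # stand-ins for one NSST high-frequency sub-band
        band_vis = rng.standard_normal((64, 64))
        fused = fuse_highpass(band_ir, band_vis)
        print(fused.shape)

In the full pipeline this selection would be repeated for every NSST scale and direction, the low-frequency band would be fused separately by the global energy feature rule, and the inverse NSST would produce the grayscale result that is then color-transferred.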
