
Fusion of Infrared and Visible Images Based on Fast Gradient Domain Guided Filtering


    Abstract: To address the loss of edge detail information in infrared and visible image fusion under traditional multi-scale fusion rules, this study proposes a fusion method based on a fast gradient domain guided filter (FGDGF). Building on the gradient domain guided filter (GDGF), the FGDGF method adjusts the filter scale to preserve structural information and denoising capability while effectively retaining large-gradient edge information, thereby achieving superior edge preservation and improved computational efficiency. First, the input source images are decomposed by fast gradient domain guided filtering. Next, a weighted fusion rule based on visual saliency mapping (VSM) is applied to obtain the fused base layer, and an adaptive pulse coupled neural network (PCNN) fusion rule with optimized parameters is used to obtain the fused detail layer. Finally, the fused image is reconstructed by combining the base and detail layers. Experimental validation shows that the proposed method improves the objective evaluation metrics of average gradient, correlation coefficient, information entropy, spatial frequency, and standard deviation by 28.6%, 14.9%, 8.9%, 32.6%, and 11.4% on average, respectively. The method not only effectively preserves edge and texture information from the source images but also enhances visual quality and runtime efficiency.
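The decompose–fuse–reconstruct pipeline described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: a plain mean filter stands in for the edge-preserving FGDGF decomposition, a global-contrast proxy stands in for the histogram-based VSM, and a max-absolute selection rule stands in for the parameter-optimized adaptive PCNN; `box_smooth` and `fuse` are hypothetical helper names introduced here.

```python
import numpy as np

def box_smooth(img, r=7):
    # Mean filter computed via an integral image. This is only a stand-in
    # for the paper's fast gradient domain guided filter (FGDGF): a true
    # FGDGF is edge-preserving, which this placeholder is not.
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/column for clean window sums
    h, w = img.shape
    k = 2 * r + 1                            # window size
    s = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return s / (k * k)

def fuse(ir, vis, r=7):
    # 1) Two-scale decomposition of each source into base + detail layers.
    b_ir, b_vis = box_smooth(ir, r), box_smooth(vis, r)
    d_ir, d_vis = ir - b_ir, vis - b_vis
    # 2) Base layer: saliency-weighted average. A simple global-contrast
    #    proxy replaces the paper's visual saliency map (VSM).
    s_ir = np.abs(ir - ir.mean())
    s_vis = np.abs(vis - vis.mean())
    w = s_ir / (s_ir + s_vis + 1e-12)
    base = w * b_ir + (1.0 - w) * b_vis
    # 3) Detail layer: max-absolute selection stands in for the paper's
    #    parameter-optimized adaptive PCNN fusion rule.
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    # 4) Reconstruction: fused image = fused base + fused detail.
    return base + detail
```

Swapping `box_smooth` for a genuine edge-preserving guided filter (for example, `cv2.ximgproc.guidedFilter` from opencv-contrib) would restore the large-gradient edge preservation that motivates the FGDGF decomposition.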

     
