Fusion of Infrared and Visible Images Based on Fast Gradient Domain Guided Filtering
Abstract
To address the loss of edge information that traditional multi-scale fusion rules cause in infrared and visible image fusion, this study proposes a fusion method based on a Fast Gradient Domain Guided Filter (FGDGF). Building on the Gradient Domain Guided Filter (GDGF), FGDGF adjusts the filter scale to preserve structural information and denoising capability while effectively retaining large-gradient edge information, yielding superior edge preservation and improved computational efficiency. First, the input source images are decomposed with the fast gradient domain guided filter. A weighted fusion rule based on a Visual Saliency Map (VSM) is then applied to generate the base-layer fusion image, while the detail layer is fused with a parameter-optimized adaptive Pulse Coupled Neural Network (PCNN) rule. Finally, the fused image is reconstructed by combining the base and detail layers. Experimental validation shows notable improvements in objective evaluation metrics: the average gradient, correlation coefficient, information entropy, spatial frequency, and standard deviation are enhanced by 28.6%, 14.9%, 8.9%, 32.6%, and 11.4% on average, respectively. These results demonstrate that the proposed method not only preserves edge and texture information from the source images but also improves visual quality and runtime efficiency.
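The overall pipeline described above (two-scale decomposition, saliency-weighted base-layer fusion, detail-layer fusion, reconstruction) can be sketched as follows. This is a minimal illustration only: a plain box filter stands in for the paper's fast gradient domain guided filter, the saliency map is a crude mean-distance approximation of the VSM rule, and an absolute-max selection replaces the adaptive PCNN detail rule. All function names and parameters here are hypothetical, not the authors' implementation.

```python
import numpy as np

def box_filter(img, r):
    # Mean filter of radius r -- a simplified stand-in for the
    # edge-preserving fast gradient domain guided filter (FGDGF).
    k = 2 * r + 1
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def saliency_weight(img):
    # Crude visual-saliency map: distance of each pixel from the
    # global mean intensity (a simplification of the VSM rule).
    s = np.abs(img - img.mean())
    return s / (s.max() + 1e-12)

def fuse(ir, vis, r=3):
    # Two-scale decomposition: base = smoothed image, detail = residual.
    base_ir, base_vis = box_filter(ir, r), box_filter(vis, r)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Base layer: saliency-weighted average of the two base images.
    w_ir, w_vis = saliency_weight(ir), saliency_weight(vis)
    alpha = w_ir / (w_ir + w_vis + 1e-12)
    base = alpha * base_ir + (1.0 - alpha) * base_vis
    # Detail layer: absolute-max selection, a simplified stand-in
    # for the paper's parameter-optimized adaptive PCNN rule.
    det = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    # Reconstruction: fused base plus fused detail.
    return base + det

rng = np.random.default_rng(0)
ir = rng.random((32, 32))
vis = rng.random((32, 32))
fused = fuse(ir, vis)
print(fused.shape)  # (32, 32)
```

Note that fusing an image with itself returns that image unchanged (base and detail layers coincide), which is a quick sanity check on the decomposition-reconstruction step.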