Abstract:
Traditional infrared and visible image fusion methods rely on a single type of feature, so the fused images suffer from missing details and blurred targets in complex environments. This study presents a method for fusing infrared and visible images based on a convolutional neural network (CNN) combined with the non-subsampled contourlet transform (NSCT). First, target feature information is extracted from the infrared and visible images by the CNN, and the source images are decomposed at multiple scales by the NSCT to obtain their high-frequency and low-frequency coefficients. Second, the high-frequency and low-frequency sub-bands of the source images are fused separately, using adaptive fuzzy logic and local variance contrast in combination with the target feature image. Finally, the fused image is obtained by the inverse NSCT. We compared the proposed method with five traditional algorithms; the experimental results show that it performs better on several objective evaluation metrics.
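To make the overall data flow of the pipeline concrete, the following minimal sketch mirrors its structure: decompose each source image into low- and high-frequency bands, fuse each band with a separate rule, and reconstruct. Because the NSCT and the CNN feature extractor have no standard Python implementation, a single-level Gaussian low-pass/high-pass split and a local-variance weight are used here as stand-ins; the function names, window sizes, and fusion rules below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter


def decompose(img, sigma=2.0):
    """Split an image into low- and high-frequency bands (stand-in for NSCT)."""
    low = gaussian_filter(img, sigma)
    high = img - low
    return low, high


def local_variance(img, size=7):
    """Local variance, used as a contrast/activity measure for the low-frequency rule."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.clip(mean_sq - mean * mean, 0.0, None)


def fuse(ir, vis):
    """Fuse one infrared and one visible image (same shape, float arrays)."""
    low_ir, high_ir = decompose(ir)
    low_vis, high_vis = decompose(vis)

    # High-frequency rule (illustrative): keep the coefficient with the larger magnitude.
    high_f = np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)

    # Low-frequency rule (illustrative): weight each source by its local variance contrast.
    v_ir, v_vis = local_variance(low_ir), local_variance(low_vis)
    w = v_ir / (v_ir + v_vis + 1e-12)
    low_f = w * low_ir + (1.0 - w) * low_vis

    # Reconstruction: inverse of the single-level split (stand-in for inverse NSCT).
    return low_f + high_f


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((256, 256))
    vis = rng.random((256, 256))
    fused = fuse(ir, vis)
    print(fused.shape)
```

In the actual method, the multi-scale, multi-directional NSCT sub-bands replace the two-band split, the CNN target feature map guides the fusion rules, and adaptive fuzzy logic replaces the simple maximum-magnitude rule shown here.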