Abstract:
The fusion of infrared and visible images plays an important role in applications such as video surveillance and target tracking. To obtain better fusion results, this study proposes a novel method that combines deep learning with image decomposition based on a robust low-rank representation. First, robust principal component analysis is used to denoise the training-set images. Next, rapid latent low-rank representation is used to learn a sparse matrix that extracts salient features and decomposes the source images into low-frequency and high-frequency components. The low-frequency components are then fused using an adaptive weighting strategy, and the high-frequency components are fused using a VGG-19 network. Finally, the fused low-frequency image is superimposed on the fused high-frequency image to obtain the final fused image. Experimental results demonstrate that the proposed method offers advantages in both the subjective and objective evaluation of image fusion.
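The following is a minimal sketch of the decompose-fuse-superimpose structure described above, not the authors' implementation: the placeholder box-filter split stands in for rapid latent low-rank representation, the fixed 0.5 weights stand in for the adaptive weighting strategy, and a max-absolute rule stands in for the VGG-19-based fusion of high-frequency components. All function names here are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(img):
    """Placeholder split into low- and high-frequency parts (box filter);
    the paper uses rapid latent low-rank representation instead."""
    low = uniform_filter(img, size=15)
    high = img - low
    return low, high

def fuse_low(low_ir, low_vis):
    """Fixed averaging as a stand-in for the paper's adaptive weighting."""
    return 0.5 * low_ir + 0.5 * low_vis

def fuse_high(high_ir, high_vis):
    """Max-absolute selection as a stand-in for the VGG-19-based fusion."""
    return np.where(np.abs(high_ir) >= np.abs(high_vis), high_ir, high_vis)

def fuse(ir, vis):
    low_ir, high_ir = decompose(ir)
    low_vis, high_vis = decompose(vis)
    # Final fused image = fused low-frequency part + fused high-frequency part
    return fuse_low(low_ir, low_vis) + fuse_high(high_ir, high_vis)

if __name__ == "__main__":
    ir = np.random.rand(256, 256).astype(np.float32)   # stand-in infrared image
    vis = np.random.rand(256, 256).astype(np.float32)  # stand-in visible image
    print(fuse(ir, vis).shape)
```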