Abstract:
The fundamental task in image fusion is the extraction of image features. Because of the channel differences between synthetic aperture radar (SAR) images and multispectral (MS) images, existing algorithms have difficulty fully extracting and exploiting the high-frequency detail information of SAR images and the low-frequency spectral information of MS images, so the fused images suffer from detail loss and spectral distortion. In this study, an image fusion algorithm based on dual-channel multiscale feature extraction and hybrid attention is proposed. First, a dual-channel network extracts multiscale high-frequency detail features and low-frequency spectral features from the SAR and MS images, successively expanding the receptive field through dilated convolutions with different dilation rates. The extracted features are then fed into the hybrid attention module for feature enhancement, and the enhanced features are superimposed on the upsampled MS image. A loss function based on the spectral angular distance is also constructed, which further alleviates detail loss and spectral distortion. Finally, a decoding network reconstructs the image to obtain a high-resolution fused image. The experimental results show that the proposed algorithm achieves the best performance among the compared methods and that the fused images maintain a good balance between detail and spectral fidelity.
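To make the two components named above more concrete, the following is a minimal sketch, not the authors' implementation, assuming a PyTorch setting: a dilated-convolution branch that enlarges the receptive field with increasing dilation rates, and a spectral-angle-distance loss term. The class and function names (`MultiScaleBranch`, `sam_loss`), channel counts, and dilation rates are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class MultiScaleBranch(nn.Module):
    """One channel of a dual-channel extractor: stacked dilated convolutions
    with growing dilation rates to successively expand the receptive field."""
    def __init__(self, in_ch, feat_ch=64, dilations=(1, 2, 4)):
        super().__init__()
        layers, ch = [], in_ch
        for d in dilations:
            # padding = dilation keeps the spatial size fixed for 3x3 kernels
            layers += [nn.Conv2d(ch, feat_ch, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            ch = feat_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

def sam_loss(fused, reference, eps=1e-8):
    """Mean spectral angle (radians) between per-pixel spectral vectors of
    the fused and reference MS images, both of shape (B, C, H, W)."""
    dot = (fused * reference).sum(dim=1)
    norm = fused.norm(dim=1) * reference.norm(dim=1)
    cos = torch.clamp(dot / (norm + eps), -1.0, 1.0)
    return torch.acos(cos).mean()

# Illustrative usage: separate branches for a single-band SAR image and a
# 4-band upsampled MS image; in training, the spectral-angle term would be
# combined with a pixel/detail loss for the full fusion network.
sar_feat = MultiScaleBranch(in_ch=1)(torch.randn(2, 1, 64, 64))
ms_feat = MultiScaleBranch(in_ch=4)(torch.randn(2, 4, 64, 64))
loss = sam_loss(torch.rand(2, 4, 64, 64), torch.rand(2, 4, 64, 64))
```

The sketch only covers the multiscale extraction and the spectral-angle loss; the hybrid attention module, the superposition with the upsampled MS image, and the decoding network described in the abstract are omitted.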