Abstract:
Mainstream deep learning-based fusion methods employ convolutional operations to extract local image features; however, the interaction between an image and a convolution kernel is content-independent, and long-range dependencies cannot be modeled well. Consequently, important contextual information may be lost, further limiting the fusion performance for infrared and visible images. To this end, we present a simple and effective fusion network for infrared and visible images, namely the multiscale transformer fusion method (MsTFusion). We first design a novel Conv Swin Transformer block to model long-range dependencies, in which a convolutional layer improves the representational ability of the global features. We then construct a multiscale self-attentional encoder-decoder network to extract and reconstruct global features without relying on local features. Moreover, we design a learnable fusion layer for feature sequences that employs a softmax operation to compute the attention weights of the feature sequences and highlight the salient features of the source images. The proposed method is an end-to-end, fully attentional model in which attention weights interact with the image content. We conducted experiments on the TNO and RoadScene datasets, and the results demonstrate that MsTFusion outperforms other methods in both subjective visual quality and objective metric comparisons. By integrating the self-attention mechanism, our method builds a fully attentional fusion model for infrared and visible image fusion and models long-range dependencies for global feature extraction and reconstruction, overcoming the limitations of deep learning-based models. Compared with other state-of-the-art traditional and deep learning methods, MsTFusion achieves remarkable fusion performance with strong generalization ability and competitive computational efficiency.
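The softmax-based fusion of feature sequences can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `softmax_fusion` and the use of an L1-norm saliency score per feature vector are assumptions for demonstration, whereas the actual fusion layer is learnable and trained end-to-end.

```python
import numpy as np

def softmax_fusion(feat_ir: np.ndarray, feat_vis: np.ndarray) -> np.ndarray:
    """Fuse infrared and visible feature sequences via softmax attention weights.

    Each input has shape (num_tokens, channels). A per-token saliency score is
    softmax-normalized across the two sources, so the more salient source
    receives a larger fusion weight at that position.
    """
    # Per-token saliency score: L1 norm of the feature vector
    # (an illustrative choice; the paper's layer learns its weighting).
    s_ir = np.abs(feat_ir).sum(axis=-1, keepdims=True)
    s_vis = np.abs(feat_vis).sum(axis=-1, keepdims=True)

    # Softmax over the two sources yields weights that sum to 1 per token.
    scores = np.stack([s_ir, s_vis])            # shape (2, num_tokens, 1)
    scores = scores - scores.max(axis=0)         # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=0, keepdims=True)

    # Weighted sum highlights the salient features of each source.
    return w[0] * feat_ir + w[1] * feat_vis
```

Because the weights form a convex combination at every token, the fused feature always lies between the two source features, which keeps salient activations from either modality without amplifying them.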