Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks

Abstract: Current deep learning-based fusion methods rely on convolutional kernels to extract local features, but single-scale networks, limited kernel sizes, and restricted network depth cannot capture the multi-scale and global characteristics of images. To address this, we propose an infrared and visible image fusion method based on attention-guided generative adversarial networks. The method uses a generator, composed of an encoder and a decoder, together with two discriminators. A multi-scale module and a channel self-attention mechanism are designed in the encoder to extract multi-scale features effectively and to model long-range dependencies among feature channels, thereby enhancing the global character of the multi-scale features. In addition, the two discriminators establish an adversarial relationship between the fused image and the two source images, preserving more detailed information. Experimental results show that the proposed method outperforms other representative methods in both subjective and objective evaluations.
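The abstract describes a channel self-attention mechanism that builds long-range dependencies across feature channels. The following is a minimal sketch of one common way such a block can be realized; it is not the authors' released code, and the module name, channel count, and the softmax-over-channel-affinities formulation are illustrative assumptions.

```python
# Sketch of a channel self-attention block: each channel is re-weighted by its
# affinity to every other channel, giving multi-scale features a global view.
import torch
import torch.nn as nn


class ChannelSelfAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))  # learned scale for the residual branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                      # (B, C, HW)
        attn = torch.bmm(flat, flat.transpose(1, 2))    # (B, C, C) channel affinity matrix
        attn = torch.softmax(attn, dim=-1)              # normalize affinities per channel
        out = torch.bmm(attn, flat).view(b, c, h, w)    # aggregate channels by affinity
        return self.gamma * out + x                     # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)                  # e.g. concatenated multi-scale features
    print(ChannelSelfAttention()(feats).shape)          # torch.Size([2, 64, 32, 32])
```

Because the attention map is computed over channels rather than spatial positions, its cost grows with the number of channels, not the image size, which is why this style of attention is often paired with multi-scale convolutional features.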

     
