Citation: DI Jing, REN Li, LIU Jizhao, GUO Wenqing, LIAN Jing. Infrared and Visible Image Fusion Based on Three-branch Adversarial Learning and Compensation Attention Mechanism[J]. Infrared Technology, 2024, 46(5): 510-521.


Infrared and Visible Image Fusion Based on Three-branch Adversarial Learning and Compensation Attention Mechanism


    Abstract: Existing deep-learning image fusion methods rely on convolution to extract features and do not consider the global features of the source images; as a result, their fusion outputs are prone to texture blurring and low contrast. Therefore, this study proposes an infrared and visible image fusion method based on three-branch adversarial learning and a compensation attention mechanism. First, the generator network uses dense blocks and the compensation attention mechanism to build three local-global branches that extract feature information. The compensation attention mechanism is then constructed from channel-feature and spatial-feature variations to extract global information and further capture infrared target and visible-light detail representations. Next, a focusing dual-adversarial discriminator is designed to determine the similarity of distributions between the fusion result and the source images. Finally, experiments are conducted on the public TNO and RoadScene datasets and compared against nine representative image fusion methods. The proposed method not only produces fusion results with clearer texture details and better contrast but also outperforms other state-of-the-art methods on objective metrics.
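The abstract describes a compensation attention mechanism built from channel-feature and spatial-feature variations, but does not give its exact formulation here. The following is a minimal NumPy sketch of one common way such channel + spatial gating can be combined; the function names, sigmoid gates, and additive "compensation" fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Pool each channel to a scalar, gate channels by it.
    pooled = feat.mean(axis=(1, 2))            # (C,) per-channel statistic
    weights = _sigmoid(pooled)                 # channel gate in (0, 1)
    return feat * weights[:, None, None]       # reweight each channel map

def spatial_attention(feat):
    # Collapse channels with mean and max to form a spatial gate map.
    avg_map = feat.mean(axis=0)                # (H, W)
    max_map = feat.max(axis=0)                 # (H, W)
    gate = _sigmoid(0.5 * (avg_map + max_map)) # spatial gate in (0, 1)
    return feat * gate[None, :, :]             # reweight each spatial location

def compensation_attention(feat):
    # Sum the two branches so each compensates for what the other suppresses:
    # channel gating keeps globally informative maps, spatial gating keeps
    # locally salient regions (e.g. infrared targets, visible textures).
    return channel_attention(feat) + spatial_attention(feat)
```

In a real generator these gates would be learned (e.g. via small convolutions or fully connected layers) rather than fixed sigmoids of pooled statistics; the sketch only illustrates the data flow of compensating channel and spatial branches.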

     
