Citation: DI Jing, LIANG Chan, REN Li, GUO Wenqing, LIAN Jing. Infrared and Visible Image Fusion Based on Multi-Scale Contrast Enhancement and Cross-Dimensional Interactive Attention Mechanism[J]. Infrared Technology, 2024, 46(7): 754-764.

Infrared and Visible Image Fusion Based on Multi-Scale Contrast Enhancement and Cross-Dimensional Interactive Attention Mechanism

Abstract: To address the problems of inadequate feature extraction, non-salient target regions in the fused image, and missing detail information in infrared and visible image fusion, this paper proposes an infrared and visible image fusion method based on multi-scale contrast enhancement and a cross-dimensional interactive attention mechanism. The main components of the proposed method are as follows. 1) Multi-scale contrast enhancement module: designed to strengthen the intensity information of target regions, facilitating the fusion of complementary information from the infrared and visible images. 2) Dense connection block: employed for feature extraction to minimize information loss and maximize information utilization. 3) Cross-dimensional interactive attention mechanism: developed to capture crucial information and enhance the performance of the network. 4) Decomposition network: designed to decompose the fused image back into the source images, so that the fused image contains more scene details and richer textures. The proposed fusion framework was evaluated on the TNO dataset. The experimental results show that the fused images obtained by this method have salient target regions and rich texture details, with better fusion performance and stronger generalization ability; the proposed method outperforms the compared methods in both subjective and objective evaluation.
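
The abstract describes the architecture only at a high level. As a rough illustration of how the listed components could fit together, the following minimal PyTorch sketch wires dense-connection encoders and a cross-dimensional interactive attention module (approximated here in the style of triplet attention's cross-dimension interaction) into a simple two-stream fusion network. All module names, channel widths, and kernel sizes are assumptions for illustration only, not the authors' implementation; the multi-scale contrast enhancement module and the decomposition network (which would reconstruct the source images from the fused result as a training-time constraint) are omitted for brevity.

# Minimal illustrative sketch; not the authors' implementation.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Feature extractor with dense connections: each layer receives all earlier outputs."""
    def __init__(self, in_ch: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class ZPool(nn.Module):
    """Concatenate max- and mean-pooled responses along the channel axis."""
    def forward(self, x):
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)


class CrossDimensionalAttention(nn.Module):
    """Attention computed over (C,W), (C,H) and (H,W) slices, then averaged."""
    def __init__(self, kernel: int = 7):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(ZPool(), nn.Conv2d(2, 1, kernel, padding=kernel // 2), nn.Sigmoid())
            for _ in range(3)])

    def forward(self, x):
        # Branch 1: interaction between channel and width dimensions (swap H and C).
        x_hw = x.permute(0, 2, 1, 3)                       # B, H, C, W
        y1 = (x_hw * self.branches[0](x_hw)).permute(0, 2, 1, 3)
        # Branch 2: interaction between channel and height dimensions (swap W and C).
        x_wc = x.permute(0, 3, 2, 1)                       # B, W, H, C
        y2 = (x_wc * self.branches[1](x_wc)).permute(0, 3, 2, 1)
        # Branch 3: ordinary spatial attention over (H, W).
        y3 = x * self.branches[2](x)
        return (y1 + y2 + y3) / 3.0


class FusionNet(nn.Module):
    """IR/VIS dense encoders -> cross-dimensional attention -> concatenation -> fused image."""
    def __init__(self):
        super().__init__()
        self.enc_ir = DenseBlock(1)
        self.enc_vis = DenseBlock(1)
        self.att = CrossDimensionalAttention()
        fused_ch = self.enc_ir.out_ch + self.enc_vis.out_ch
        self.decoder = nn.Sequential(
            nn.Conv2d(fused_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, ir, vis):
        f = torch.cat([self.att(self.enc_ir(ir)), self.att(self.enc_vis(vis))], dim=1)
        return self.decoder(f)


if __name__ == "__main__":
    net = FusionNet()
    ir = torch.rand(1, 1, 128, 128)   # single-channel infrared image
    vis = torch.rand(1, 1, 128, 128)  # single-channel visible image
    print(net(ir, vis).shape)         # torch.Size([1, 1, 128, 128])

In this sketch the three attention branches pool along different axes before a shared convolution-and-sigmoid gate, so channel information interacts with each spatial dimension in turn; this is only one common way to realize cross-dimension interaction and may differ from the authors' design.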

     
