Citation: LI Qiuheng, DENG Hao, LIU Guihua, PANG Zhongxiang, TANG Xue, ZHAO Junqin, LU Mengyuan. Infrared and Visible Images Fusion Method Based on Multi-Scale Features and Multi-head Attention[J]. Infrared Technology, 2024, 46(7): 765-774.


Infrared and Visible Images Fusion Method Based on Multi-Scale Features and Multi-head Attention

  • Abstract: Infrared and visible image fusion is prone to losing details, and existing fusion strategies struggle to balance visual detail features against infrared target features. To address these problems, this paper proposes an infrared and visible image fusion method that combines multi-scale feature fusion with efficient multi-head self-attention. First, to improve the description of targets and scenes, a multi-scale encoding network extracts features of the source images at different scales. Second, a fusion strategy combining Transformer-based multi-head transposed attention with residual dense blocks is proposed to balance fused details against the overall structure. Finally, the multi-scale fused feature maps are fed into a nested-connection decoding network to reconstruct a fused image with salient infrared targets and rich detail information. Experiments on the public TNO and M3FD datasets against seven classical fusion methods show that the proposed method performs better in both visual quality and quantitative metrics, and its fused images achieve better results on object detection tasks.
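
As a rough illustration of the multi-scale encoding stage described above, the following PyTorch sketch extracts features at several resolutions from a single-channel source image. The `ConvBlock` and `MultiScaleEncoder` names, the layer count, and the channel widths are illustrative assumptions; the paper's exact encoder design is not reproduced here.

```python
# Minimal sketch of a multi-scale encoder (illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 convolution followed by ReLU, used as the basic feature extractor."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x))

class MultiScaleEncoder(nn.Module):
    """Encode a single-channel source image into feature maps at four scales."""
    def __init__(self, in_ch=1, base_ch=16):
        super().__init__()
        self.stem = ConvBlock(in_ch, base_ch)
        # each block doubles the channel count while the pooling halves the resolution
        self.blocks = nn.ModuleList(
            [ConvBlock(base_ch * 2 ** i, base_ch * 2 ** (i + 1)) for i in range(3)]
        )
        self.down = nn.MaxPool2d(2)

    def forward(self, x):
        feats = [self.stem(x)]
        for block in self.blocks:
            feats.append(block(self.down(feats[-1])))
        return feats  # feature maps from fine (full resolution) to coarse

# Example: encode the infrared and visible images separately, then fuse per scale.
encoder = MultiScaleEncoder()
ir_feats = encoder(torch.randn(1, 1, 256, 256))
vis_feats = encoder(torch.randn(1, 1, 256, 256))
print([f.shape for f in ir_feats])
```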


    Abstract: To address detail loss and the imbalance between visual detail features and infrared (IR) target features in fused infrared and visible images, this study proposes a fusion method that combines multiscale feature fusion with efficient multi-head self-attention (EMSA). The method comprises three key components. 1) Multiscale coding network: multilevel features are extracted from the source images, enhancing the descriptive capability of targets and scenes. 2) Fusion strategy: transformer-based EMSA is combined with residual dense blocks to balance local details and overall structure during fusion. 3) Nested-connection decoding network: the multilevel fused feature maps are fed into a nested-connection decoding network to reconstruct the fused result, emphasizing prominent IR targets and rich scene details. Extensive experiments on the public TNO and M3FD datasets demonstrate the efficacy of the proposed method, which achieves superior results in both quantitative metrics and visual comparisons and, in particular, delivers state-of-the-art performance on target detection tasks. By effectively preserving detail information and balancing visible and IR features, the approach not only improves fusion quality but also serves as a benchmark for infrared and visible image fusion.
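
The fusion strategy in step 2 can be sketched under the assumption of a Restormer-style "transposed" multi-head attention, in which attention is computed across channel dimensions rather than spatial positions, so the cost stays modest at full image resolution. The `MultiHeadTransposedAttention` and `AttentionFusion` modules below are hypothetical and only approximate the idea of combining attention with a residual path; they are not the authors' EMSA or residual dense block implementation.

```python
# Minimal sketch of channel-wise ("transposed") multi-head attention fusion (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadTransposedAttention(nn.Module):
    """Multi-head attention computed over channels rather than pixels."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        assert channels % num_heads == 0
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        # learnable per-head scaling of the channel-attention logits
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)

        def split_heads(t):
            # (batch, channels, H, W) -> (batch, heads, channels_per_head, pixels)
            return t.reshape(b, self.num_heads, c // self.num_heads, h * w)

        q, k, v = map(split_heads, (q, k, v))
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        # channel-by-channel attention map: (c/heads) x (c/heads), cheap when c << H*W
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out)

class AttentionFusion(nn.Module):
    """Fuse infrared and visible feature maps of one scale with transposed attention."""
    def __init__(self, channels):
        super().__init__()
        self.reduce = nn.Conv2d(channels * 2, channels, kernel_size=1)
        self.attn = MultiHeadTransposedAttention(channels)

    def forward(self, ir_feat, vis_feat):
        fused = self.reduce(torch.cat([ir_feat, vis_feat], dim=1))
        # residual connection keeps local detail while attention reweights global structure
        return fused + self.attn(fused)

# Example: fuse one scale of the two encoders' outputs.
fusion = AttentionFusion(channels=32)
fused = fusion(torch.randn(1, 32, 128, 128), torch.randn(1, 32, 128, 128))
print(fused.shape)  # torch.Size([1, 32, 128, 128])
```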

