Nighttime Infrared and Visible Image Fusion Based on Adaptive Enhancement Loss and Multi-Scale Dilated Convolution

Abstract: Most existing infrared and visible image fusion methods are designed for well-lit scenes, so their results in nighttime scenarios suffer from low contrast, loss of textural detail, and color distortion. To address these issues, this study proposes a nighttime infrared and visible image fusion method based on an adaptive enhancement loss and multi-scale dilated convolution. First, an illumination adjustment network is designed in which the adaptive enhancement loss guides a light-enhancement curve to adaptively enhance the brightness and color information of nighttime visible images. Second, a multi-scale dilated convolution module is designed for the dual-branch feature-extraction structure of the fusion network to better capture multi-scale features and contextual information. Finally, the illumination adjustment network and the fusion network are optimized jointly, coupling the low-level feature representations of the enhancement and fusion tasks. Comparative experiments against seven representative fusion algorithms were conducted on the LLVIP, MSRS, and TNO datasets. The results show that the proposed method outperforms current mainstream algorithms in both subjective and objective evaluations; in particular, in low-light scenes it markedly improves the contrast and clarity of the fused images while preserving more textural detail and color information.
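To make the two components described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' published code: a Zero-DCE-style light-enhancement curve of the form x ← x + A·x·(1 − x), whose per-pixel parameters A would be supervised by the adaptive enhancement loss, and a multi-scale dilated convolution block that runs parallel 3×3 convolutions with different dilation rates and fuses them with a 1×1 convolution. All class names, channel counts, dilation rates, and the number of curve iterations are illustrative assumptions; the adaptive enhancement loss and the joint training procedure are not shown.

# Minimal sketch of the two components named in the abstract.
# All names and hyperparameters are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class LightEnhancementCurve(nn.Module):
    """Zero-DCE-style iterative curve adjustment: a small CNN predicts per-pixel
    curve parameters A, and the visible image is brightened by repeatedly applying
    x <- x + A * x * (1 - x). The adaptive enhancement loss that would supervise A
    is omitted here because the abstract does not specify its terms."""
    def __init__(self, channels: int = 32, iterations: int = 4):
        super().__init__()
        self.iterations = iterations
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * iterations, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        params = self.net(x)                          # (B, 3*iterations, H, W)
        for a in torch.chunk(params, self.iterations, dim=1):
            x = x + a * x * (1.0 - x)                 # one quadratic curve step
        return torch.clamp(x, 0.0, 1.0)

class MultiScaleDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated and
    fused by a 1x1 convolution, enlarging the receptive field without losing
    resolution -- one plausible reading of the multi-scale dilated module."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(feats))

if __name__ == "__main__":
    vis = torch.rand(1, 3, 256, 256)                  # nighttime visible image in [0, 1]
    enhanced = LightEnhancementCurve()(vis)           # illumination adjustment stage
    feat = MultiScaleDilatedConv(3, 64)(enhanced)     # one branch of the fusion encoder
    print(enhanced.shape, feat.shape)

In the paper's pipeline the enhanced visible image and the infrared image would each pass through such a feature-extraction branch before fusion, with the enhancement and fusion networks trained jointly; the sketch only illustrates the building blocks.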
