Abstract:
Most existing infrared and visible image fusion methods are designed for well-lit environments and therefore suffer from low contrast, loss of textural detail, and color distortion when applied to nighttime scenes. To address these problems, this study proposes a nighttime infrared and visible image fusion method based on an adaptive enhancement loss and multi-scale dilated convolution. First, an illumination adjustment network is designed around the adaptive enhancement loss, which guides an adaptive enhancement curve to restore the brightness and color information of nighttime visible images. Second, a multi-scale dilated convolution module is designed for the dual-branch feature-extraction structure of the fusion network to better capture multi-scale features and contextual information. Finally, the illumination adjustment and fusion networks are trained jointly, coupling the underlying feature representations of the enhancement and fusion tasks. Comparative experiments were conducted on the LLVIP, MSRS, and TNO datasets against seven representative fusion algorithms. The results show that the proposed method outperforms current mainstream algorithms in both subjective and objective evaluations, significantly improving the contrast and clarity of fused images in low-light scenes while preserving more textural detail and color information.
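The abstract does not specify the internals of the multi-scale dilated convolution module, but the underlying idea can be illustrated generically: the same kernel is applied at several dilation rates, so each branch sees a different receptive field, and the responses are stacked as multi-scale features. The following is a minimal NumPy sketch under that assumption; the loop-based convolution, the choice of dilation rates (1, 2, 3), and the function names are illustrative, not the paper's implementation.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2D correlation with a dilation rate, written with
    explicit loops for clarity rather than speed. A dilation of d inserts
    d-1 gaps between kernel taps, enlarging the receptive field without
    adding parameters."""
    kh, kw = kernel.shape
    # Effective (dilated) kernel extent and symmetric padding
    eh, ew = dilation * (kh - 1) + 1, dilation * (kw - 1) + 1
    ph, pw = eh // 2, ew // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += kernel[a, b] * xp[i + a * dilation,
                                                   j + b * dilation]
    return out

def multi_scale_features(x, kernel, dilations=(1, 2, 3)):
    """Stack responses at several dilation rates along a channel axis,
    giving one feature map per scale."""
    return np.stack([dilated_conv2d(x, kernel, d) for d in dilations])
```

With a 3x3 kernel, the three branches cover 3x3, 5x5, and 7x7 receptive fields at the same spatial resolution, which is how dilated convolutions capture contextual information at multiple scales without downsampling.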