QI Yunfei, WU Jinhui, LIU Ji, ZHANG Boyang, TONG Xiuliang. Channel and Edge Enhanced Infrared-Visible Fusion Network[J]. Infrared Technology.

Channel And Edge Enhanced Infrared-Visible Fusion Network

  • To address insufficient complementarity of multimodal features and weakened semantic information in infrared and visible image fusion, this paper proposes a Channel and Edge Boost Attention Module (CEBAM). The module jointly optimizes feature fusion through a cross-layer channel attention mechanism and an edge-guided branch, and is embedded into the Gradient Residual Dense Block (GRDB) framework to construct a new network, CEASeFusion. CEBAM adopts a lightweight design (grouped convolution plus low-rank decomposition) to reduce computational overhead, and a joint loss function (content loss, edge-consistency loss, and semantic-perception loss) strengthens the semantic awareness of the fused image. Experiments on the MSRS dataset show that the fused images of CEASeFusion outperform those of mainstream methods on multiple objective and subjective metrics, including EN, MI, VIF, and EI. In semantic segmentation with the BiSeNet model, the mean Intersection over Union (mIoU) is 5.6 percentage points higher than that of the baseline SeAFusion. Inference runs at 20 FPS on an NVIDIA RTX 4060, balancing fusion quality and real-time performance. The fused images markedly improve target saliency and the preservation of texture details, making the method well suited to applications that demand real-time, robust environmental perception, such as autonomous driving, traffic monitoring, night-time security, and drone inspection.
