Road Object Detection Method Based on Infrared and Visible Image Fusion

    Abstract: To achieve road object detection in scenes with dense targets and complex backgrounds during autonomous driving, an improved YOLOv7-tiny detection algorithm is proposed. First, the SiLU activation function replaces the Leaky ReLU in the original network to enhance the model's feature-extraction ability. Second, a small-object detection layer is added to improve detection accuracy, and a Receptive Field Enhancement (RFE) module is introduced to capture multi-scale information. Finally, the backbone network is restructured into a Dense Channel Compression for Feature Spatial Solidification (DCFS) structure, which improves the purity of forward-propagated features while strengthening the gradient flow in backpropagation. Combining the advantages of different imaging modalities, experiments were conducted on an infrared and visible image fusion dataset. The results show that the improved algorithm achieves an F1 score of 80.6, an mAP@50 of 84%, and an mAP@50-95 of 51.2%, improvements of 3%, 4.6%, and 3.6% over the baseline model, respectively. The improved algorithm effectively increases road object detection accuracy and alleviates missed and false detections.
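    The activation swap described in the abstract can be illustrated with a minimal sketch (plain Python with scalar inputs; this is an assumed illustration, not the paper's implementation, which applies the functions element-wise inside YOLOv7-tiny's convolution blocks):

    ```python
    import math

    def leaky_relu(x, negative_slope=0.01):
        # Leaky ReLU: identity for positive inputs, a small fixed
        # slope for negative inputs (piecewise linear, not smooth).
        return x if x > 0 else negative_slope * x

    def silu(x):
        # SiLU (also called Swish): x * sigmoid(x).
        # Smooth and non-monotonic, with a slight dip below zero for
        # moderately negative inputs, unlike Leaky ReLU.
        return x / (1.0 + math.exp(-x))
    ```

    SiLU's smoothness (it is differentiable everywhere, whereas Leaky ReLU has a kink at zero) is the usual rationale for such a swap improving feature extraction.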
