Fine-grained Classification Recognition of Ship Targets Based on FP-YOLO

Abstract: To address the insufficient fine-grained classification capability and suboptimal localization accuracy of current ship target recognition methods, this paper proposes an improved YOLO-based algorithm named fine-grained classification and precise positioning YOLO (FP-YOLO). The algorithm introduces novel cross-stage partial bottleneck modules, D2f and G2f, into the backbone and neck networks, respectively, enhancing the backbone's detail feature extraction capability and the neck's global information interaction capacity. In addition, a backbone-neck feature fusion module (BNFM) is designed to efficiently fuse same-level feature maps, exploiting the complementarity between backbone and neck features and further improving the network's generalization ability. Finally, a sliding loss function is introduced so that the model makes fuller use of hard-to-classify samples during training. Experimental results show that the proposed algorithm achieves 66.9% mAP@0.95 on a self-built horizontal-view fine-grained ship classification dataset, outperforming YOLOv8n and YOLOv10n by 2.5 and 5.4 percentage points, respectively, and reaches 57.2% mAP@0.5:0.95 on a public remote sensing fine-grained ship classification dataset, an improvement of 3.4 and 7.0 percentage points over the same two baselines.
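The abstract does not define the sliding loss itself. A common published formulation for up-weighting hard samples in detection is the Slide Loss weighting (from YOLO-FaceV2), which boosts samples whose IoU falls just below a threshold. A minimal sketch under the assumption that the paper's sliding loss follows this scheme; the function name and the threshold `mu` are illustrative, not taken from the paper:

```python
import math

def slide_weight(iou: float, mu: float) -> float:
    """Slide-style sample weighting (assumed formulation, per Slide Loss).

    Easy negatives (iou well below mu) keep a baseline weight of 1;
    hard samples just below the threshold mu get the largest boost;
    confident positives (iou >= mu) are down-weighted smoothly as
    their IoU grows, so training emphasis stays on borderline cases.
    """
    if iou <= mu - 0.1:
        return 1.0                 # easy negative: baseline weight
    elif iou < mu:
        return math.exp(1.0 - mu)  # hard sample near the threshold: boosted
    else:
        return math.exp(1.0 - iou)  # positive: weight decays as iou -> 1

# Example: with mu = 0.5, a borderline sample (iou = 0.45) is weighted
# more heavily than both an easy negative and a confident positive.
print(slide_weight(0.45, 0.5) > slide_weight(0.2, 0.5))  # → True
```

In a detection loss, this weight would multiply the per-sample classification term, so gradients from hard-to-classify samples dominate updates.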

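The mAP@0.5:0.95 figure quoted in the results follows the standard COCO protocol: average precision is computed at ten IoU thresholds from 0.50 to 0.95 in steps of 0.05, then averaged. A minimal illustration of that averaging; the per-threshold AP values below are made up, not taken from the paper:

```python
def map_50_95(ap_per_iou):
    """COCO-style mAP@0.5:0.95: mean of AP evaluated at the ten
    IoU thresholds 0.50, 0.55, ..., 0.95."""
    thresholds = [0.50 + 0.05 * i for i in range(10)]
    assert len(ap_per_iou) == len(thresholds)
    return sum(ap_per_iou) / len(ap_per_iou)

# Hypothetical per-threshold APs for one model (illustrative only):
aps = [0.80, 0.78, 0.75, 0.71, 0.66, 0.60, 0.52, 0.42, 0.30, 0.18]
print(round(map_50_95(aps), 3))  # → 0.572
```

Because the strict thresholds (0.90, 0.95) contribute equally to the mean, this metric rewards precise localization, which is why it is used alongside mAP@0.5 to evaluate fine-grained detectors.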
     
