Infrared Armored Target Detection Based on Edge-perception in Deep Neural Network

SHENG Dajun, ZHANG Qiang

Citation: SHENG Dajun, ZHANG Qiang. Infrared Armored Target Detection Based on Edge-perception in Deep Neural Network[J]. Infrared Technology, 2021, 43(8): 784-791.

Funding: Supported by the Equipment Pre-research Fund

Details
    Author biography:

    SHENG Dajun (1981-), male, lecturer. Research interests: guidance test technology and computer vision applications.

    Corresponding author:

    ZHANG Qiang (1975-), male, senior engineer. Research interests: maintenance and testing of electro-optical systems, infrared guidance technology, etc. E-mail: x0376y@163.com

  • CLC number: TP753

  • Abstract: Automatic detection of armored targets has long been a research focus and a difficult problem in the field of infrared guidance. The traditional approach is to extract low-level features of the target and train a feature classifier on them. However, because traditional detection algorithms cannot cover all target patterns, their detection performance is limited in practical applications. Inspired by the edge-perception model, this paper proposes an improved edge-aware deep network: an edge-aware fusion module sharpens the armored-target contours, while a feature-extraction module and a context-aggregation module allow the network to adapt to morphological changes of the target, giving high detection and recognition accuracy. The verification results show that the proposed detection network can effectively improve the detection and localization accuracy of armored targets in infrared images.
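The abstract describes a pipeline of three modules: a feature-extraction backbone, a context-aggregation module, and an edge-aware fusion module that sharpens target contours. Below is a minimal PyTorch sketch of that structure; the layer choices, channel widths, dilation rates, and the way the predicted edge map re-weights the features are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of the three-module structure described in the abstract.
# Channel sizes, dilation rates, and the fusion strategy are assumptions.
import torch
import torch.nn as nn

class FeatureExtraction(nn.Module):
    """Backbone stub: stacked strided convolutions producing a feature map."""
    def __init__(self, in_ch=1, out_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class ContextAggregation(nn.Module):
    """Aggregate multi-scale context with parallel dilated convolutions."""
    def __init__(self, ch=64):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)
    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class EdgeAwareFusion(nn.Module):
    """Predict a coarse edge map and use it to re-weight the context features."""
    def __init__(self, ch=64):
        super().__init__()
        self.edge_head = nn.Conv2d(ch, 1, 3, padding=1)   # contour probability map
        self.fuse = nn.Conv2d(ch + 1, ch, 3, padding=1)
    def forward(self, feat):
        edge = torch.sigmoid(self.edge_head(feat))
        fused = self.fuse(torch.cat([feat * (1 + edge), edge], dim=1))
        return fused, edge

class EdgeAwareDetector(nn.Module):
    """Toy pipeline: features -> context aggregation -> edge-aware fusion -> score map."""
    def __init__(self):
        super().__init__()
        self.features = FeatureExtraction()
        self.context = ContextAggregation()
        self.fusion = EdgeAwareFusion()
        self.det_head = nn.Conv2d(64, 1, 1)               # per-location objectness score
    def forward(self, x):
        feat = self.context(self.features(x))
        fused, edge = self.fusion(feat)
        return self.det_head(fused), edge

# Example: one 256x256 single-channel infrared frame.
scores, edge_map = EdgeAwareDetector()(torch.randn(1, 1, 256, 256))
```

The sketch keeps the edge branch as an auxiliary output so that, as is common in edge-aware detectors, a contour loss could supervise it alongside the detection loss.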
  • Figure 1.  Framework of the edge-aware object detection model

    Figure 2.  Context aggregation module

    Figure 3.  Edge-aware fusion module

    Figure 4.  Detection rate versus FPPI curves

    Figure 5.  Performance comparison of the different detection algorithms

    Figure 6.  Armored-target detection results of the compared algorithms on different infrared images, where (a)-(f) denote different images

    Table 1.  Statistics of detection results

    Classification    Result
    Positive          TP (True Positive)     FP (False Positive)
    Negative          FN (False Negative)    TN (True Negative)
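Table 1 lists the counts (TP, FP, FN, TN) from which the detection-rate/FPPI curves of Fig. 4 are built. As a hedged illustration, the sketch below shows one common way such counts are accumulated and converted into detection rate (recall) and FPPI (false positives per image); the IoU matching rule and the 0.5 threshold are conventional assumptions, not values taken from the paper.

```python
# Detection rate and FPPI computed from the counts defined in Table 1.
# The IoU-based matching and the 0.5 threshold are assumed conventions.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def detection_rate_and_fppi(detections_per_image, truths_per_image, iou_thr=0.5):
    """Both arguments are lists of per-image box lists."""
    tp = fp = fn = 0
    for dets, gts in zip(detections_per_image, truths_per_image):
        matched = set()
        for d in dets:
            hit = next((i for i, g in enumerate(gts)
                        if i not in matched and iou(d, g) >= iou_thr), None)
            if hit is None:
                fp += 1                       # no ground truth matched: false positive
            else:
                matched.add(hit)
                tp += 1                       # matched a ground-truth target
        fn += len(gts) - len(matched)         # targets that were missed
    detection_rate = tp / (tp + fn) if tp + fn else 0.0
    fppi = fp / max(len(detections_per_image), 1)
    return detection_rate, fppi
```

Sweeping the detector's score threshold and recomputing these two quantities at each setting yields a curve of the kind shown in Fig. 4.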

    Table 2.  Characteristics of the different test subsets

    Subset    Characteristic
    A         The target is blurred, the contrast is low, and some turrets are partially covered.
    B         The armored target is small, and there are many bright areas in the background.
    C         The armored target is large and fills almost the whole infrared image.
    D         The armor boundary is not obvious and the background is oversaturated.
    E         The noise level is very high and the internal details of the target are uneven.
    F         The armored target is blurred and partially occluded, and some caterpillar tracks are covered by weeds.

    Table 3.  Detection rates under different test data sets

    Dataset   SSD       DenseNet   ResNet    ConvNet   YOLO-v2   Proposed
    A         73.32%    84.34%     86.61%    85.90%    89.15%    90.06%
    B         62.41%    65.43%     72.15%    77.00%    77.59%    77.14%
    C         69.12%    73.22%     74.35%    72.24%    77.09%    78.07%
    D         78.28%    87.08%     89.40%    86.01%    88.36%    89.44%
    E         51.91%    52.90%     59.75%    59.11%    58.69%    59.58%
    F         48.65%    51.55%     60.11%    52.17%    60.20%    60.21%
Publication history
  • Received: 2020-05-10
  • Revised: 2020-09-03
  • Published: 2021-08-20
