Citation: XIA Yan. Research on 3D Target Recognition Algorithm Based on Infrared Features[J]. Infrared Technology, 2022, 44(11): 1161-1166.

Research on 3D Target Recognition Algorithm Based on Infrared Features

More Information
  • Received Date: April 11, 2022
  • Revised Date: July 27, 2022
  • Abstract: Target recognition based on 3D features suffers from misjudgment in regions of similar point clouds and from the large volume of point cloud data that must be processed, which results in a low target detection rate and a high misjudgment rate. To improve the accuracy and speed of target recognition, a three-dimensional target recognition algorithm based on infrared features is proposed. The system simultaneously acquires a 2D infrared image and 3D point cloud data of the target area, determines the projection range of the target from its salient infrared features, and computes the pose relationship between the system and the target. The restricted range of the target within the point cloud is then derived from the infrared feature mapping relationship, which greatly reduces the number of points that must be matched and computed. In the experiment, the same target vehicle was tested under the same background conditions, and the recognition data for three different test angles were recorded and analyzed. The results show that the conventional point cloud recognition algorithm achieved an average target detection rate of 93.4%, an average false positive rate of 19.5%, and a convergence time of 4.77 s, whereas the proposed algorithm achieved 98.7%, 1.5%, and 1.23 s, respectively. The target recognition algorithm based on infrared features therefore offers better detection and misjudgment rates as well as faster processing.
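
The core idea described in the abstract is that the infrared image supplies a salient region around the target, and only the points of the 3D cloud that project into this region are passed to the matching stage. Below is a minimal illustrative sketch of that cropping step, assuming a pinhole infrared camera model and known LiDAR-to-camera extrinsics; the function name, parameters, and numeric values are placeholders and not the authors' implementation.

    # Hypothetical sketch (not the paper's code): keep only the points of a
    # LiDAR cloud that project inside an infrared-salient bounding box, so that
    # the 3D matching stage runs on a much smaller cloud.
    import numpy as np

    def crop_cloud_by_ir_roi(points, K, R, t, roi, img_size):
        """points: (N, 3) cloud in the LiDAR frame.
        K: (3, 3) infrared camera intrinsics; R, t: LiDAR-to-camera extrinsics.
        roi: (u_min, v_min, u_max, v_max) box around the thermally salient target.
        img_size: (width, height) of the infrared image.
        Returns the subset of points whose projection falls inside the ROI."""
        # Transform the cloud into the camera frame.
        cam = points @ R.T + t                 # (N, 3)
        in_front = cam[:, 2] > 0.0             # discard points behind the camera
        cam = cam[in_front]
        # Pinhole projection onto the infrared image plane.
        uv = cam @ K.T
        uv = uv[:, :2] / uv[:, 2:3]            # (M, 2) pixel coordinates
        u_min, v_min, u_max, v_max = roi
        w, h = img_size
        inside = (
            (uv[:, 0] >= max(u_min, 0)) & (uv[:, 0] <= min(u_max, w - 1)) &
            (uv[:, 1] >= max(v_min, 0)) & (uv[:, 1] <= min(v_max, h - 1))
        )
        return points[in_front][inside]

    # Example usage with synthetic placeholder values:
    pts = np.random.uniform(-10.0, 10.0, size=(5000, 3))
    K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 256.0], [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    target_pts = crop_cloud_by_ir_roi(pts, K, R, t, roi=(250, 200, 390, 320), img_size=(640, 512))

Because only the cropped cloud is handed to the matching stage, the number of candidate points, and hence the matching time, shrinks roughly in proportion to how tightly the infrared ROI bounds the target, which is consistent with the reduction in convergence time reported above.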
  • [1]
    REN S, HE K, Girshick R, et al. Faster R-CNN: Real-time target detection in regional planning Network[J]. IEEE Pair Model Analysis and Machinery Information, 2017, 39(6): 1137-1149.
    [2]
    GUO Y L, Sohel F, Bennamoun M, et al. A novel local surface feature for 3D object recognition under clutter and occlusion[J]. Information Sciences, 2015, 293: 196-213. DOI: 10.1016/j.ins.2014.09.015
    [3]
    王果, 王成, 张振鑫, 等. 利用车载激光点云的分车带识别及单木分割方法[J]. 激光与红外, 2020, 50(11): 1333-1337. DOI: 10.3969/j.issn.1001-5078.2020.11.008

    WANG Guo, WANG Cheng, ZHANG Zhenxin, et al. Single tree segmentation method of urban distributing belt based on vehicle-borne laser point cloud data[J]. Laser & Infrared, 2020, 50(11): 1333-1337. DOI: 10.3969/j.issn.1001-5078.2020.11.008
    [4]
    胡海瑛, 惠振阳, 李娜. 基于多基元特征向量融合的机载LiDAR点云分类[J]. 中国激光, 2020, 47(8): 229-239. https://www.cnki.com.cn/Article/CJFDTOTAL-JJZZ202008029.htm

    HU Haiying, HUI Zhenyang, LI Na. Airborne LiDAR point cloud classification based on multiple-entity eigenvector fusion[J]. Chinese Journal of Lasers, 2020, 47(8): 229-239. https://www.cnki.com.cn/Article/CJFDTOTAL-JJZZ202008029.htm
    [5]
    Sochor J, Spaňhel J, Herout A. Boxcars: improving fine-grained recognition of vehicles using 3-d bounding boxes in traffic surveillance[J]. IEEE Transactions on Intelligent Transportation Systems, 2018, 20(1): 97-108.
    [6]
    Sochor J, Herout A, Havel J. BoxCars: 3D boxes as CNN input for improved fine-grained vehicle recognition[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 3006-3015.
    [7]
    Dubská M, Herout A, Juránek R, et al. Fully automatic roadside camera calibration for traffic surveillance[J]. IEEE Transactions on Intelligent Transportation Systems, 2014, 16(3): 1162-1171.
    [8]
    Dubská M, Herout A, Sochor J. Automatic camera calibration for traffic understanding[C]// Proceedings of the British Machine Vision Conference(BMVC), 2014: 1-12.
    [9]
    GISEOK K, JAE- SOO C. Vision- Based vehicle detection and inter- vehicle distance estimation for driver alarm system[J]. Optical Review, 2012, 25(6): 388- 393.
    [10]
    薛培林, 吴愿, 殷国栋, 等. 基于信息融合的城市自主车辆实时目标识别[J]. 机械工程学报, 2020, 56(12): 165-173. https://www.cnki.com.cn/Article/CJFDTOTAL-JXXB202012021.htm

    XUE Peilin, WU Yuan, YIN Guodong, et al. Real-time target recognition for urban autonomous vehicles based on information fusion[J]. Journal of Mechanical Engineering, 2020, 56(12): 165-173. https://www.cnki.com.cn/Article/CJFDTOTAL-JXXB202012021.htm
    [11]
    仝选悦, 吴冉, 杨新锋, 等. 红外与激光融合目标识别方法[J]. 红外与激光工程, 2018, 47(5): 158-165. https://www.cnki.com.cn/Article/CJFDTOTAL-HWYJ201805025.htm

    TONG Xuanyue, WU Ran, YANG Xinfeng, et al. Fusion target recognition method of infrared and laser[J]. Infrared and Laser Engineering, 2018, 47(5): 158-165. https://www.cnki.com.cn/Article/CJFDTOTAL-HWYJ201805025.htm
    [12]
    YAN Y, MAO Y, LI B. Second: sparsely embedded convolutional detection[J]. Sensors, 2018, 18(10): 3337/1-17.
