Abstract:
Object detection is a popular research topic and a fundamental task in computer vision, and anchor-based detectors are widely used in many fields. Current anchor selection methods face two main problems: the prior anchor sizes are fixed for a specific dataset, and they generalize poorly to different scenarios. The unsupervised K-means algorithm commonly used to compute anchor boxes is strongly influenced by its initial values; on datasets where objects share a single, similar size, the resulting clusters show little variation and cannot reflect the multiscale outputs of the network. In this study, a multiscale anchor (MSA) method that introduces multiscale optimization was developed to address these issues. The method scales and stretches the anchor boxes generated by clustering according to the characteristics of the dataset, so that the optimized anchors retain the statistics of the original data while exploiting the multiple output scales of the model. Moreover, the method is applied in the preprocessing phase of training and therefore does not increase the model's inference time. Finally, the mainstream single-stage detector You Only Look Once (YOLO) was selected for extensive experiments on infrared and industrial scene datasets. The results show that the MSA method significantly improves detection accuracy in small-sample scenarios.
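The abstract describes the anchor pipeline only at a high level. As a rough illustration of the general idea, the sketch below clusters ground-truth box sizes with the 1 − IoU distance commonly used for YOLO anchor selection and then rescales the resulting anchors per detection scale. The function names (`kmeans_anchors`, `multiscale_stretch`), the even split of anchors across detection heads, and the scale factors are illustrative assumptions, not the paper's exact MSA formulation.

```python
import numpy as np

def iou_wh(boxes, centers):
    """IoU between (w, h) pairs assuming a shared top-left corner,
    as in the standard YOLO anchor-clustering formulation."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster ground-truth (w, h) pairs using 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)
        new_centers = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
            for i in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]  # sort by area

def multiscale_stretch(anchors, scale_factors=(0.8, 1.0, 1.25)):
    """Illustrative multiscale adjustment (assumed, not the paper's MSA rule):
    split the sorted anchors evenly across the detection heads and
    shrink/enlarge each group so each head covers a distinct size range."""
    groups = np.split(anchors, len(scale_factors))
    return np.vstack([g * f for g, f in zip(groups, scale_factors)])

if __name__ == "__main__":
    # Synthetic (w, h) boxes in pixels, mimicking a dataset where most
    # objects have a single, similar size.
    boxes = np.abs(np.random.default_rng(1).normal([30, 40], [8, 10], size=(500, 2)))
    anchors = kmeans_anchors(boxes, k=9)
    msa_anchors = multiscale_stretch(anchors)
    print(np.round(anchors, 1))       # raw clustered anchors: little spread
    print(np.round(msa_anchors, 1))   # stretched anchors: wider size coverage
```

Because this adjustment only changes the anchor values supplied to training, it adds nothing to the network itself, which is consistent with the abstract's claim that inference time is unaffected.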