Vehicle wheel weld detection based on improved YOLO V4 algorithm

Authors: Liang Tian Jiao, Pan Wei Guo, Bao Hong, Pan Feng

Journal: Computer Optics @computer-optics

Section: Image processing, pattern recognition

Published in: Issue 2, Vol. 46, 2022.

Free access

In recent years, vision-based object detection has made great progress in many fields. In automobile manufacturing, for instance, weld detection is a key step of weld inspection in wheel production: automatically detecting and localizing welds on wheels can improve the efficiency of wheel hub production. At present, few deep learning-based methods exist for detecting vehicle wheel welds. In this paper, a method based on the YOLO v4 algorithm is proposed to detect vehicle wheel welds. The main contributions of the proposed method are the use of k-means clustering to optimize anchor box sizes, a Distance-IoU (DIoU) loss used to optimize the YOLO v4 loss function, and non-maximum suppression based on DIoU to eliminate redundant candidate bounding boxes. Together these steps improve detection accuracy: experiments show that the improved method achieves high accuracy in vehicle wheel weld detection, 4.92 percentage points above the baseline model in AP75 and 2.75 percentage points above it in AP50. We also evaluated the proposed method on the public KITTI dataset, and the detection results confirm the improved method's effectiveness.
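
As background for the abstract above, the following is a minimal NumPy sketch of the three techniques it names: k-means anchor clustering with a 1 - IoU distance, a DIoU regression term (1 - DIoU), and DIoU-based non-maximum suppression. The function names (diou, diou_loss, diou_nms, kmeans_anchors), the [x1, y1, x2, y2] box format, and the default threshold are illustrative assumptions and do not come from the paper.

```python
import numpy as np

def diou(box, boxes):
    """Distance-IoU between one box and an array of boxes.
    Boxes are [x1, y1, x2, y2]. DIoU = IoU - d^2 / c^2, where d is the
    distance between box centers and c is the diagonal length of the
    smallest box enclosing both."""
    # intersection and IoU
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area_a + area_b - inter + 1e-9)
    # squared distance between box centers
    cx_a, cy_a = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    cx_b, cy_b = (boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2
    # squared diagonal of the smallest enclosing box
    ex1 = np.minimum(box[0], boxes[:, 0]); ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2]); ey2 = np.maximum(box[3], boxes[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9
    return iou - d2 / c2

def diou_loss(pred, target):
    """DIoU regression loss, 1 - DIoU (lower is better)."""
    return 1.0 - diou(pred, target[None, :])[0]

def diou_nms(boxes, scores, threshold=0.5):
    """Non-maximum suppression that suppresses candidates by DIoU
    instead of plain IoU, keeping boxes whose centers are far apart."""
    order = scores.argsort()[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        if order.size == 1:
            break
        rest = order[1:]
        overlap = diou(boxes[best], boxes[rest])
        order = rest[overlap <= threshold]   # drop highly overlapping boxes
    return keep

def kmeans_anchors(wh, k=9, iters=100, seed=0):
    """Cluster ground-truth (width, height) pairs with 1 - IoU as the
    distance (boxes aligned at a common origin), yielding anchor sizes
    matched to the dataset. Requires len(wh) >= k."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
                np.minimum(wh[:, None, 1], centers[None, :, 1])
        union = wh[:, 0:1] * wh[:, 1:2] + \
                centers[None, :, 0] * centers[None, :, 1] - inter
        assign = (1 - inter / union).argmin(axis=1)
        new_centers = np.array([wh[assign == i].mean(axis=0)
                                if np.any(assign == i) else centers[i]
                                for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]  # sorted by area
```

In an actual YOLO v4 training pipeline the DIoU term would be computed with a differentiable framework (e.g., PyTorch tensors) so it can be backpropagated; the NumPy version above only illustrates the arithmetic behind the loss, the NMS variant, and the anchor clustering.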


Object detection, vehicle wheel weld, YOLO v4, DIoU

Short address: https://sciup.org/140293812

IDR: 140293812

References

  • Viola P, Jones M. Robust real-time object detection. Int J Comput Vis 2004; 57(2): 137-154.
  • Chen TT, Wang RL, Dai B, Liu DX, Song JZ. Likelihood-field-model-based dynamic vehicle detection and tracking for self-driving. IEEE Trans Intell Transp Syst 2016; 17(11): 3142-3158.
  • Fu ZH, Chen YW, Yong HW, Jiang RX, Zhang L, Hua XS. Foreground gating and background refining network for surveillance object detection. IEEE Trans Image Process 2019; 28(12): 6077-6090.
  • Kong H, Yang J, Chen ZH. Accurate and efficient inspection of speckle and scratch defects on surfaces of planar products. IEEE Trans Industr Inform 2017; 13(4): 1855-1865.
  • Guo ZX, Shui PL. Anomaly based sea-surface small target detection using k-nearest neighbour classification. IEEE Trans Aerosp Electron Syst 2020; 56(6): 4947-4964.
  • Imoto K, Nakai T, Ike T, Haruki K, Sato Y. A CNN-based transfer learning method for defect classification in semiconductor manufacturing. IEEE Trans Semicond Manuf 2019; 32(4): 455-459.
  • Pashina TA, Gaidel AV, Zelter PM, Kapishnikov AV, Nikonorov AV. Automatic highlighting of the region of interest in computed tomography images of the lungs. Computer Optics 2020; 44(1): 74-81. DOI: 10.18287/2412-6179-CO-659.
  • Zou ZX, Shi ZW, Guo YH, Ye JP. Object detection in 20 years: A survey. arXiv Preprint 2019.
  • Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998; 86(11): 2278-2324.
  • Lowe DG. Object recognition from local scale-invariant features. IEEE Int Conf on Computer Vision, Kerkyra 1999: 1150-1157. DOI: 10.1109/ICCV.1999.790410.
  • Dalal N, Triggs B. Histograms of oriented gradients for human detection. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego 2005: 886-893. DOI: 10.1109/CVPR.2005.177.
  • Suykens JAK, Vandewalle J. Least squares support vector machine classifiers. Neural Process Lett 1999; 9: 293-300.
  • Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Int Conf on Neural Information Processing Systems, New York 2012: 1097-1105.
  • Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. IEEE Conf on Computer Vision and Pattern Recognition, Las Vegas 2016: 779-788. DOI: 10.1109/CVPR.2016.91.
  • Redmon J, Farhadi A. YOLO9000: better, faster, stronger. IEEE Conf on Computer Vision and Pattern Recognition, Honolulu 2017: 7263-7271. DOI: 10.1109/CVPR.2017.690.
  • Redmon J, Farhadi A. YOLOv3: An incremental improvement. arXiv Preprint 2018.
  • Bochkovskiy A, Wang CY, Mark-Liao HY. YOLOv4: Optimal speed and accuracy of object detection. arXiv Preprint 2020.
  • Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. SSD: Single shot multibox detector. European Conf on Computer Vision, Cham 2016: 21-37.
  • Gidaris S, Komodakis N. Object detection via a multiregion and semantic segmentation-aware CNN model. Int Conf on Computer Vision, Santiago 2015: 1134-1142. DOI: 10.1109/ICCV.2015.135.
  • Girshick R. Fast R-CNN. Int Conf on Computer Vision, Santiago 2015: 1440-1448. DOI: 10.1109/ICCV.2015.169.
  • Ren SQ, He KM, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2016; 39(6): 1137-1149.
  • He KM, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. IEEE Int Conf on Computer Vision, Venice 2017: 2980-2988. DOI: 10.1109/ICCV.2017.322.
  • Cai ZW, Vasconcelos N. Cascade R-CNN: Delving into high quality object detection. IEEE Conf on Computer Vision and Pattern Recognition, Salt Lake City 2018: 6154-6162. DOI: 10.1109/CVPR.2018.00644.
  • Zhou HY, Zhuang ZL, Liu Y, Liu Y, Zhang X. Defect classification of green plums based on deep learning. Sensors 2020; 20(23): 6993.
  • Huang LC, Yang Y, Deng YF, Yu YN. DenseBox: Unifying landmark localization with end to end object detection. arXiv Preprint 2015.
  • Rezatofighi H, Tsoi N, Gwak JY, Sadeghian A, Reid I, Savarese S. Generalized intersection over union: A metric and a loss for bounding box regression. IEEE Conf on Computer Vision and Pattern Recognition, Long Beach 2019: 658-666. DOI: 10.1109/CVPR.2019.00075.
  • Everingham M, Gool LV, Williams CKI, Winn J, Zisserman A. The PASCAL visual object classes (VOC) challenge. Int J Comput Vis 2010; 88: 303-338.
  • Wang CY, Mark-Liao HY, Wu YH, Chen PY, Hsieh JW, Yeh IH. CSPNet: A new backbone that can enhance learning capability of CNN. IEEE Conf on Computer Vision and Pattern Recognition Workshops 2020: 1571-1580. DOI: 10.1109/CVPRW50498.2020.00203.
  • Zheng ZH, Wang P, Liu W, Li JZ, Ye RG, Ren DW. Distance-IoU loss: Faster and better learning for bounding box regression. arXiv Preprint 2019. Source: https://arxiv.org/abs/1911.08287.
  • Bodla N, Singh B, Chellappa R, Davis LS. Soft-NMS: Improving object detection with one line of code. IEEE Int Conf on Computer Vision, Venice 2017: 5562-5570. DOI: 10.1109/ICCV.2017.593.
  • He YH, Zhang XY, Savvides M, Kitani K. Bounding box regression with uncertainty for accurate object detection. arXiv Preprint 2018. Source: https://arxiv.org/abs/1809.08545.
  • Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI vision benchmark suite. IEEE Conf on Computer Vision and Pattern Recognition 2012: 3354-3361. DOI: 10.1109/CVPR.2012.6248074.
Research article