Journal of Guangdong University of Technology ›› 2023, Vol. 40 ›› Issue (04): 67-76. DOI: 10.12052/gdutxb.220139

• Computer Science and Technology •

Helmet Wearing Detection Algorithm Integrating Transfer Learning and YOLOv5

Cao Zhi-xiong1, Wu Xiao-ling1, Luo Xiao-wei2, Ling Jie1   

  1. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China;
    2. Department of Architecture and Civil Engineering, City University of Hong Kong, Hong Kong 999077, China
  • Received: 2022-09-13    Online: 2023-07-25    Published: 2023-08-02

Abstract: To address the missed detections and low detection accuracy of existing helmet wearing detection algorithms on small and crowded targets, this paper proposes a helmet wearing detection method based on an improved YOLOv5 and transfer learning. First, because the default prior (anchor) boxes are not suited to this task, the K-means algorithm is used to cluster anchor box sizes appropriate for helmet detection. Then, a spatial-channel mixed attention module is introduced at the end of the feature extraction network to strengthen the learning of task-relevant weights and suppress the weights of irrelevant background. Further, the judgment metric of the non-maximum suppression (NMS) algorithm in the YOLOv5 post-processing stage is improved to reduce falsely deleted and missed prediction boxes. The proposed network is then trained with a transfer learning strategy, which mitigates the scarcity of existing data sets and improves the generalization ability of the model. Finally, a cascaded judgment framework for helmet wearing is built and deployed in visual sensor networks. Experimental results show that the proposed method raises the average precision (IoU=0.5) on the helmet wearing data set to 93.6%, 5 percentage points higher than the original model, and that it outperforms other state-of-the-art algorithms, clearly improving the accuracy of helmet wearing detection in construction scenarios.
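
As a minimal sketch of the anchor clustering step described above, the prior box sizes can be obtained by K-means over the ground-truth widths and heights, using the 1 - IoU distance that is common for YOLO-style anchor clustering. The number of clusters, iteration count, and helper names below are illustrative assumptions, not the paper's exact settings.

import numpy as np

def wh_iou(boxes, centroids):
    # IoU between (w, h) pairs, assuming boxes share the same top-left corner
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    # Cluster ground-truth (w, h) pairs into k anchor sizes with a 1 - IoU distance
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(wh_iou(boxes, centroids), axis=1)  # highest IoU = smallest distance
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i) else centroids[i]
                        for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]  # sorted by box area, small to large

# Usage: boxes is an (N, 2) array of ground-truth widths and heights from the helmet data set,
# e.g. anchors = kmeans_anchors(boxes, k=9)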

Key words: helmet wearing detection, YOLOv5, transfer learning, attention module, visual sensor network
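
The abstract does not spell out the improved NMS judgment metric; a distance-aware overlap measure such as DIoU is one common choice for crowded targets, so the sketch below assumes a DIoU-style NMS purely for illustration. The box format and suppression threshold are likewise assumptions.

import numpy as np

def diou(box, boxes):
    # DIoU between one box and an array of boxes, format (x1, y1, x2, y2)
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area_a + area_b - inter + 1e-9)
    # squared distance between box centres
    d2 = ((box[0] + box[2] - boxes[:, 0] - boxes[:, 2]) / 2) ** 2 + \
         ((box[1] + box[3] - boxes[:, 1] - boxes[:, 3]) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    c2 = (np.maximum(box[2], boxes[:, 2]) - np.minimum(box[0], boxes[:, 0])) ** 2 + \
         (np.maximum(box[3], boxes[:, 3]) - np.minimum(box[1], boxes[:, 1])) ** 2 + 1e-9
    return iou - d2 / c2

def diou_nms(boxes, scores, thresh=0.5):
    # Greedy NMS: a candidate is suppressed only when its DIoU with a kept box exceeds thresh
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[diou(boxes[i], boxes[rest]) <= thresh]
    return keep

Because the metric subtracts a normalized centre-distance term from the IoU, two heavily overlapping boxes whose centres are far apart (typical of adjacent workers in crowded scenes) are less likely to suppress each other, which matches the stated goal of reducing falsely deleted prediction boxes.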

CLC Number: TP391.41