Intelligent Detection of Small Defects in Power Transmission Lines in UAV Images Using DRSPTL

Article Type: Research Paper

Authors

1 Shahid Bahonar University of Kerman, Iran

2 Faculty of Physics, Shahid Bahonar University of Kerman, Kerman, Iran

Abstract

In recent years, small object detection using deep learning techniques has attracted particular attention in many practical applications, and it remains a challenging task because small objects have low resolution in images and do not contain detailed information. In this paper, a new two-stage detector based on detecting objects with a recursive feature pyramid and switchable atrous convolution (DetectoRS) is introduced for the intelligent detection of small, important defects in power transmission lines, and the DetectoRS architecture has been substantially modified for this purpose. In the proposed method, DRSPTL, Cascade R-CNN with a ResNeXt-101 backbone is used to increase the accuracy of small defect detection. High-resolution RGB images were captured by UAV from the transmission lines of the Tehran, Kerman, Shiraz, Isfahan, and Ahvaz regional electric companies, and the training and test datasets of the defects were prepared by a group of experts. To construct the training data, approximately 80% of the full set of images containing small defects was selected and labeled. DRSPTL achieves the highest accuracy in comparison with two well-established object detection methods, RetinaNet and RepPoints. It is worth noting that, according to the obtained results, automatically detecting these defects and thereby preventing many power outages can significantly reduce the time and cost of the regional electric companies.
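
The DRSPTL detector described above combines DetectoRS (a recursive feature pyramid with switchable atrous convolution) with a Cascade R-CNN head and a ResNeXt-101 backbone. As a rough illustration only, the sketch below shows how such a combination is typically assembled in an MMDetection-style configuration; the field names follow MMDetection's public DetectoRS configs, and all specific values (RFP steps, ASPP dilations, the base dataset file) are assumptions rather than the settings reported in the paper.

```python
# Illustrative MMDetection-style config: DetectoRS components (RFP neck and
# switchable atrous convolution, SAC) combined with a Cascade R-CNN head and a
# ResNeXt-101 (32x4d) backbone. Values are assumptions for demonstration only,
# not the exact settings used in the paper.
_base_ = [
    '../_base_/models/cascade_rcnn_r50_fpn.py',   # Cascade R-CNN detection head
    '../_base_/datasets/coco_detection.py',       # swap in the PTL defect dataset here
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]

model = dict(
    backbone=dict(
        type='DetectoRS_ResNeXt',                 # ResNeXt-101 trunk with SAC
        depth=101,
        groups=32,
        base_width=4,
        conv_cfg=dict(type='ConvAWS'),
        sac=dict(type='SAC', use_deform=True),    # switchable atrous convolution
        stage_with_sac=(False, True, True, True),
        output_img=True),
    neck=dict(
        type='RFP',                               # recursive feature pyramid
        rfp_steps=2,
        aspp_out_channels=64,
        aspp_dilations=(1, 3, 6, 1),
        rfp_backbone=dict(
            rfp_inplanes=256,
            type='DetectoRS_ResNeXt',
            depth=101,
            groups=32,
            base_width=4,
            conv_cfg=dict(type='ConvAWS'),
            sac=dict(type='SAC', use_deform=True),
            stage_with_sac=(False, True, True, True),
            style='pytorch')))
```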

Keywords

  • Deep Learning
  • Small Defect Detection
  • Power Transmission Lines
  • UAV Images

Article Title [English]

Automatic Small Defect Detection in Unmanned Aerial Vehicle Images of Power Transmission Lines using DRSPTL

Authors [English]

  • Mitra Peyrohoseini Nejad 1
  • Azam Karami 2
1 Shahid Bahonar University of Kerman, Iran
2 Faculty of Physics, Shahid Bahonar University of Kerman, Kerman, Iran
Abstract [English]

Recently, small object detection based on deep learning techniques has gained particular attention in many practical applications and remains challenging because small objects have low resolution and do not contain detailed information. In this article, a new two-stage detector based on detecting objects with a recursive feature pyramid and switchable atrous convolution (DetectoRS) is introduced to find small and important defects, such as loose nut-bolts and missing nuts, in power transmission lines (PTL), and the architecture of DetectoRS is modified accordingly. In the proposed technique, called DRSPTL, Cascade R-CNN with a ResNeXt-101 backbone is used to increase the accuracy of small defect detection. In this work, high-resolution RGB images of PTL were captured by unmanned aerial vehicles (UAVs) for the Tehran, Kerman, Shiraz, Isfahan, and Ahvaz regional electric companies in Iran. The training and test datasets were created from the captured faulty images, which were annotated by experts. To construct the training dataset, nearly eighty percent of the whole set of faulty images was selected and labeled. The performance of the proposed method is compared with two state-of-the-art object detection techniques, RetinaNet and RepPoints, and DRSPTL achieves the highest small defect detection accuracy. It is noteworthy that the obtained results could significantly reduce the time and cost of electric power companies by detecting the defects automatically and preventing the occurrence of many power outages.
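
The dataset construction described above, expert-annotated UAV images with roughly 80% used for training, can be illustrated by the minimal split routine below. Only the split ratio comes from the abstract; the folder name, file extension, and random seed are hypothetical.

```python
# Minimal sketch of an ~80/20 train/test split over expert-annotated defect
# images. Only the split ratio comes from the abstract; the folder name,
# file extension, and seed are hypothetical.
import random
from pathlib import Path

def split_dataset(image_dir, train_ratio=0.8, seed=0):
    """Shuffle the annotated images reproducibly and split them into train/test lists."""
    images = sorted(Path(image_dir).glob('*.jpg'))
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * train_ratio)
    return images[:n_train], images[n_train:]

if __name__ == '__main__':
    train_imgs, test_imgs = split_dataset('ptl_defect_images')  # hypothetical folder
    print(f'{len(train_imgs)} training images, {len(test_imgs)} test images')
```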

Keywords [English]

  • Deep Learning
  • Small Defect Detection
  • Power Transmission Lines
  • UAV Images