Frontiers of Optoelectronics

Front. Optoelectron.    2020, Vol. 13 Issue (4) : 425-432    https://doi.org/10.1007/s12200-020-0967-5
RESEARCH ARTICLE
Area-based non-maximum suppression algorithm for multi-object fault detection
Jieyin BAI1(), Jie ZHU2, Rui ZHAO1, Fengqiang GU3, Jiao WANG3
1. Nanrui Group Co., Ltd., Beijing 100192, China
2. State Grid Beijing Electric Power Company, Beijing 100031, China
3. Beijing Kedong Electric Power Control System Co., Ltd., Beijing 100192, China
Abstract

Unmanned aerial vehicle (UAV) photography has become the main method of power system inspection; however, automated fault detection remains a major challenge. Conventional algorithms have difficulty processing all of the detected objects in a power transmission line simultaneously. Deep-learning-based object detection offers a new approach to fault detection; however, the traditional non-maximum suppression (NMS) algorithm fails to delete redundant annotations when a single object, such as an insulator or a damper, receives two labels. In this study, we propose an area-based non-maximum suppression (A-NMS) algorithm to solve the problem of one object having multiple labels. The A-NMS algorithm is also used in the fusion stage of cropping detection to detect small objects. Experiments show that A-NMS combined with cropping detection achieves a mean average precision (mAP) of 88.58% and a recall of 91.23% on the aerial image dataset, realizing multi-object fault detection in aerial images.
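To make the idea concrete, below is a minimal Python sketch of an area-based, class-agnostic suppression step. The abstract does not give the algorithm's pseudocode, so the specific rule here (comparing the intersection against the smaller box's area and suppressing across labels) is our assumed reading of "area-based"; all function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def area_nms(boxes, scores, labels, T=0.5):
    """Sketch of an area-based, class-agnostic NMS.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    labels is accepted but intentionally ignored: unlike per-class NMS,
    suppression runs across classes, so a "good" and a "bad" box on the
    same insulator cannot both survive. The overlap test divides the
    intersection by the SMALLER box's area (an assumption for the
    "area-based" criterion), which also catches nested boxes that a
    plain IoU test can miss.
    """
    order = np.argsort(scores)[::-1]  # highest score first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of box i with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        overlap = inter / np.minimum(areas[i], areas[rest])
        order = rest[overlap <= T]  # suppress across all labels
    return keep
```

By contrast, a conventional per-class NMS would suppress only within each label's box pool, so two boxes with different labels on the same object would both survive, which is exactly the redundant-annotation failure described above.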

Keywords: fault detection; area-based non-maximum suppression (A-NMS); cropping detection
Corresponding Author(s): Jieyin BAI   
Online First Date: 11 June 2020    Issue Date: 31 December 2020
 Cite this article:   
Jieyin BAI, Jie ZHU, Rui ZHAO, et al. Area-based non-maximum suppression algorithm for multi-object fault detection[J]. Front. Optoelectron., 2020, 13(4): 425–432.
 URL:  
https://academic.hep.com.cn/foe/EN/10.1007/s12200-020-0967-5
https://academic.hep.com.cn/foe/EN/Y2020/V13/I4/425
Fig.1  Structure of the detection network, where the detector is followed by a classifier
Fig.2  Four objects to be detected. (a) Intact insulator labeled “good”; (b) string-off insulator labeled “bad”; (c) intact damper labeled “double”; (d) shedding damper labeled “single”
Fig.3  Detection results of a string-off insulator. The blue box is the expected “good” label, whereas the red boxes are unexpected “bad” labels
Fig.4  Diagram of image cropping. (a) Original image, where the 1/4 and 3/4 points on the x-axis and y-axis are the cropping points; (b) top left subpicture cropped by the yellow lines from the original image; (c) top right subpicture cropped by the purple lines from the original image; (d) bottom left subpicture cropped by the green lines from the original image; (e) bottom right subpicture cropped by the orange lines from the original image
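As a rough illustration of the cropping scheme in Fig. 4, the sketch below splits an image at the 1/4 and 3/4 points of each axis into four overlapping sub-images. The exact corner assignment is our reading of the caption, not code from the paper.

```python
def crop_four(img):
    """Split an image into four overlapping sub-images at the 1/4 and
    3/4 cropping points (assumed layout of Fig. 4). img is an HxWxC
    numpy array; each sub-image covers 3/4 of each axis, so neighbouring
    crops overlap, keeping objects near the cut lines intact."""
    h, w = img.shape[:2]
    x14, x34 = w // 4, 3 * w // 4
    y14, y34 = h // 4, 3 * h // 4
    return {
        "top_left":     img[0:y34, 0:x34],   # yellow lines in Fig. 4(b)
        "top_right":    img[0:y34, x14:w],   # purple lines in Fig. 4(c)
        "bottom_left":  img[y14:h, 0:x34],   # green lines in Fig. 4(d)
        "bottom_right": img[y14:h, x14:w],   # orange lines in Fig. 4(e)
    }
```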
Fig.5  Box fusion algorithm. (a) Two detection boxes; (b) fused detection box. The background green box is the output box obtained by fusing the two detection boxes in (a)
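The fusion step can be sketched as merging two detections of the same object into their smallest enclosing box, which matches the visual description of the background green box in Fig. 5(b); again, this is an assumed reading of the figure rather than the authors' published code.

```python
def fuse_boxes(a, b):
    """Fuse two [x1, y1, x2, y2] boxes into their enclosing box
    (assumed fusion rule for Fig. 5: detections of one object split
    across neighbouring sub-images are merged back into a single box
    in original-image coordinates)."""
    return [min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3])]
```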
Fig.6  Sensitivity of the NMS, A-NMS, and cropping detection methods to the threshold T, which ranges from 0.3 to 0.9 in intervals of 0.1. Black bars indicate the traditional NMS algorithm, orange bars indicate the A-NMS algorithm, and green bars indicate the cropping detection method
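A threshold sweep like the one behind Fig. 6 could be reproduced with the area_nms sketch above. The boxes, scores, and labels below are invented for illustration and are not the paper's data.

```python
import numpy as np

# Two heavily overlapping boxes with different labels (the two-label
# failure case) plus one separate box; values are purely illustrative.
boxes  = np.array([[10, 10, 110, 110],
                   [20, 20, 120, 120],
                   [300, 40, 380, 90]], dtype=float)
scores = np.array([0.92, 0.85, 0.77])
labels = np.array([0, 1, 2])

for T in np.arange(0.3, 1.0, 0.1):
    kept = area_nms(boxes, scores, labels, T=round(float(T), 1))
    print(f"T={T:.1f}: kept boxes {kept}")
```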
Fig.7  Detection results of the NMS and A-NMS algorithms. The green box represents a string-off insulator labeled “bad”, whereas the blue box represents an intact insulator labeled “good”. (a) Detection results of the traditional NMS algorithm. The “bad” box is correct, whereas the “good” box is an additional box; (b) detection results of the A-NMS algorithm. The “bad” box is correct
Fig.8  Impact of cropping detection on four types of objects: “good”, “bad”, “double”, and “single”. Red bars indicate the results of the benchmark method, whereas blue bars indicate the results of the cropping detection method. Here, AP refers to average precision
Fig.9  Results of the benchmark and cropping detection methods. Detection results of (a) benchmark algorithm and (b) cropping detection method
detector                    benchmark   A-NMS   cropping detection   mAP      recall
faster R-CNN + ResNet101    √           –       –                    0.8142   0.8421
faster R-CNN + ResNet101    √           √       –                    0.8594   0.8875
faster R-CNN + ResNet101    √           √       √                    0.8858   0.9123
Tab.1  mAP and recall for different methods. “√” indicates that the corresponding algorithm is used
detection scheme             number of GPUs   time/ms
benchmark                    1                210
benchmark + A-NMS            1                212
A-NMS + cropping detection   1                850
A-NMS + cropping detection   4                220
Tab.2  Detection time of different methods in different GPU environments