Abstract: The structural integrity of an aero-engine is critical to flight safety. At present, borescope-based inspection of aero-engine defects is performed mainly by manual operation. To improve detection accuracy and efficiency, an intelligent aero-engine defect detection algorithm fusing attention and multi-scale features is proposed to assist borescope inspection. To address the class imbalance of defect samples in the original borescope images, a multi-sample fusion data augmentation method based on geometric transformation and Poisson image editing is used to enrich the small-sample classes and construct the defect dataset. A coordinate attention (CA) module is integrated into the baseline network YOLOv5 to emphasize the extraction of defect features and strengthen the network's ability to distinguish defect targets from complex backgrounds. A weighted bidirectional feature pyramid network (BiFPN) is constructed in the neck to achieve higher-level feature fusion and improve the representation of multi-scale targets. The bounding-box regression loss is defined as the efficient intersection over union (EIOU) loss, enabling fast and accurate localization and recognition of defects. Experimental results show that the proposed algorithm reaches a mean average precision of 89.7% on defect detection, 6.3% higher than the baseline network, while the trained model occupies only 14.0 MB. The proposed algorithm can therefore effectively detect the main defects of aero-engines.
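The Poisson image editing step of the augmentation pipeline pastes a defect patch into a new background by solving a Poisson equation, rather than copying pixels directly. The sketch below is a minimal pure-Python illustration of that idea on a full rectangular region: it solves the discrete equation lap(f) = lap(src) inside the grid with f fixed to the destination values on the border, using Jacobi iteration. This is an illustrative toy, not the authors' implementation; in practice OpenCV's cv2.seamlessClone performs this blending with an arbitrary mask.

```python
def poisson_blend(src, dst, iters=300):
    """Blend src into dst (same-size 2D lists of floats): solve the
    discrete Poisson equation lap(f) = lap(src) in the interior,
    with f = dst on the border, by Jacobi iteration."""
    h, w = len(dst), len(dst[0])
    f = [row[:] for row in dst]          # start from the destination
    for _ in range(iters):
        new = [row[:] for row in f]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                # Discrete Laplacian of the source (the guidance field)
                lap_src = (4 * src[i][j] - src[i - 1][j] - src[i + 1][j]
                           - src[i][j - 1] - src[i][j + 1])
                # Jacobi update: interior matches src's gradients,
                # border pins the result to dst
                new[i][j] = (f[i - 1][j] + f[i + 1][j]
                             + f[i][j - 1] + f[i][j + 1] + lap_src) / 4
        f = new
    return f
```

Because the source gradients are preserved while the boundary comes from the destination, the pasted defect takes on the illumination of its new background, which is what makes the augmented samples look natural.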
Key words: aero-engine / defect detection / deep learning / YOLOv5 / attention mechanism
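The EIOU regression loss adopted above extends CIOU by penalizing center distance, width difference, and height difference separately, each normalized by the smallest enclosing box. The following is a minimal pure-Python sketch based on the published formulation (Zhang et al., 2022), not the authors' code; boxes are in (x1, y1, x2, y2) corner format.

```python
def eiou_loss(box_a, box_b, eps=1e-9):
    """EIOU = 1 - IoU + center-distance term + width term + height term."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection over union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)

    # Smallest enclosing box: width, height, squared diagonal
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch

    # Squared distance between box centers
    dx = (ax1 + ax2) / 2 - (bx1 + bx2) / 2
    dy = (ay1 + ay2) / 2 - (by1 + by2) / 2
    rho2 = dx * dx + dy * dy

    # Separate width/height discrepancies: what EIOU adds over CIOU
    dw = (ax2 - ax1) - (bx2 - bx1)
    dh = (ay2 - ay1) - (by2 - by1)

    return (1 - iou
            + rho2 / (c2 + eps)
            + dw * dw / (cw * cw + eps)
            + dh * dh / (ch * ch + eps))
```

Splitting the aspect-ratio penalty of CIOU into independent width and height terms gives a direct gradient on each box dimension, which is the property credited for the faster, more accurate localization reported in the abstract.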
Table 1. Environment configuration details
Item               Value
Operating system   Windows 10
CPU                Intel Core i5-12400
GPU                NVIDIA GeForce RTX 3070 Ti, 8 GB
PyTorch version    1.10.1
CUDA version       11.3
cuDNN version      8.2.0
Python version     3.8.13
OpenCV version     4.6.0.66
Table 2. Test performance results of improved model and original model
Model          P/%    R/%    mAP/%
YOLOv5         79.6   86.5   83.4
Improvement 1  86.8   89.3   89.1
Improvement 2  87.9   89.6   89.7
Improvement 3  85.5   88.8   88.0
Table 3. Results of ablation experiment
Data augmentation  CA   BiFPN  EIOU   AP(oxidation)/%  AP(crack)/%  AP(missing)/%  mAP/%  Detection rate/(frame·s−1)
—                  —    —      —      88.5             75.6         86.0           83.4   169.5
√                  —    —      —      88.4             81.8         88.1           86.1   169.5
√                  √    —      —      90.7             83.8         88.8           87.8   166.7
√                  √    √      —      91.7             85.3         89.9           89.0   156.3
√                  √    √      √      92.2             86.2         90.8           89.7   156.3
Table 4. Accuracy results of different integration positions of attention mechanism
Position      P/%    R/%    mAP/%
No attention  82.7   86.7   86.1
Position 1    85.0   87.9   87.3
Position 2    85.4   88.1   87.8
Position 3    82.5   87.2   85.8
Table 5. Detection results of different structures of neck network
Structure                mAP/%  Model size/MB  Detection rate/(frame·s−1)
FPN+PAN                  86.1   13.7           169.5
4-scale prediction head  87.2   14.6           143.2
BiFPN                    87.5   13.9           162.6
Table 6. Test results of different loss functions
Loss function  AP(oxidation)/%  AP(crack)/%  AP(missing)/%  mAP/%
GIOU           87.7             80.8         87.1           85.2
DIOU           88.6             81.1         87.2           85.7
CIOU           88.4             81.8         88.1           86.1
EIOU           89.5             83.6         88.8           87.3
Table 7. Test results of different models
Model           mAP/%  Detection rate/(frame·s−1)  Model size/MB
SSD             78.0   119.3                       91.6
Faster R-CNN    82.5   29.8                        521.6
RetinaNet       83.6   45.1                        139.1
YOLOv5          86.1   169.5                       13.7
Proposed model  89.7   156.3                       14.0
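The BiFPN variant compared in Table 5 fuses features from different scales with "fast normalized fusion" (Tan et al., EfficientDet): each input gets a learnable non-negative weight, and the output is a normalized weighted sum. The sketch below shows only this fusion rule, with plain scalars standing in for feature maps; names and values are illustrative, not taken from the paper's implementation.

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """O = sum(w_i * I_i) / (eps + sum(w_i)), with w_i clamped by ReLU.

    features: per-scale inputs already resized to a common resolution
    weights:  learnable per-input scalars (trained with the network)
    """
    relu_w = [max(0.0, w) for w in weights]   # keep weights non-negative
    total = sum(relu_w) + eps                 # eps avoids division by zero
    return sum(w * f for w, f in zip(relu_w, features)) / total
```

Compared with softmax-based fusion this avoids an exponential per weight, and compared with an unweighted sum it lets the network learn that inputs at different resolutions contribute unequally, which is what improves multi-scale defect representation here.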

