Image Generation for Surface Defects of Underwater Structures Based on Deep Convolutional Generative Adversarial Networks

CHENG Feng-wen, GAN Jin, LI Xing, WU Wei-guo

Journal of Changjiang River Scientific Research Institute ›› 2023, Vol. 40 ›› Issue (9): 155-161. DOI: 10.11988/ckyyb.20220421
Engineering Safety and Disaster Prevention


Abstract

The aim of this study is to improve the quality and quantity of surface defect image datasets for underwater structures and to facilitate the application of deep learning methods to underwater inspection. A method for generating surface defect images of underwater structures is proposed based on deep convolutional generative adversarial networks (DCGAN). First, an underwater image acquisition device is designed to guarantee image quality by adjusting the shooting distance and supplementing the light intensity. Second, the DCGAN is optimized and its loss function improved, yielding an image generation model for surface defects of underwater structures. Finally, the effectiveness of the generated images is assessed with the YOLOv5 detection network. The results show an average peak signal-to-noise ratio of 21.1426 dB and an average structural similarity of 0.7168 for the generated crack images of underwater structures. Combining the generated images with real images in the detection model effectively improves detection accuracy. The study provides technical support for the health inspection of hydraulic structures such as dams and headrace tunnels.
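
The abstract reports image quality in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). As a minimal sketch of how such scores can be computed with scikit-image, and not the authors' own evaluation code, the following Python snippet pairs generated crack images with real reference images; the directory layout, file naming, and one-to-one pairing scheme are illustrative assumptions.

from pathlib import Path

import numpy as np
from skimage.color import rgb2gray
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def load_gray(path):
    # Read an image and return it as float64 grayscale in [0, 1].
    img = imread(path)
    if img.ndim == 3:                      # RGB -> single channel
        img = rgb2gray(img)                # rgb2gray already yields floats in [0, 1]
    img = img.astype(np.float64)
    return img / 255.0 if img.max() > 1.0 else img

real_dir = Path("data/real_cracks")        # hypothetical directory layout
fake_dir = Path("data/generated_cracks")

psnr_scores, ssim_scores = [], []
for real_path in sorted(real_dir.glob("*.png")):
    fake_path = fake_dir / real_path.name  # assumes matching file names
    real, fake = load_gray(real_path), load_gray(fake_path)
    # data_range=1.0 because both images were normalized to [0, 1]
    psnr_scores.append(peak_signal_noise_ratio(real, fake, data_range=1.0))
    ssim_scores.append(structural_similarity(real, fake, data_range=1.0))

print(f"average PSNR: {np.mean(psnr_scores):.4f} dB")
print(f"average SSIM: {np.mean(ssim_scores):.4f}")

PSNR rewards pixel-level fidelity while SSIM tracks perceptual structure, which is why the two are usually reported together. Note that GAN outputs generally have no pixel-aligned ground truth, so the simple name-based pairing above would have to be replaced by whatever matching scheme the authors actually used.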

Key words

underwater structure / surface defect detection / deep learning / image generation / deep convolutional generative adversarial networks

Cite this article

CHENG Feng-wen, GAN Jin, LI Xing, WU Wei-guo. Image Generation for Surface Defects of Underwater Structures Based on Deep Convolutional Generative Adversarial Networks[J]. Journal of Changjiang River Scientific Research Institute, 2023, 40(9): 155-161. https://doi.org/10.11988/ckyyb.20220421
