Open Access

Optimizing Accuracy and Efficiency in Ship Detection from Satellite Images through a Comparative Analysis of Object Detection Models

Nesli S. Azdavay1*, Zofia G. Zajac2, Hakan M. Didis3, Ilayda Boz4
1Mersin University, Mersin, Türkiye
2University of Radom, Masovian Voivodeship, Poland
3Mersin University, Mersin, Türkiye
4Tarsus University, Mersin, Türkiye
* Corresponding author: nazdavay@mersin.edu.tr

Presented at the International Trend of Tech Symposium (ITTSCONF2024), İstanbul, Türkiye, Dec 07, 2024

SETSCI Conference Proceedings, 2024, 21, Pages: 6-11, https://doi.org/10.36287/setsci.21.2.006

Published Date: 12 December 2024

This study evaluates the performance of the Faster R-CNN and YOLOv7 object detectors for dynamic ship detection in maritime applications. Both algorithms are tested under challenging conditions, including poor image quality caused by cloud and dust obscuration, varying lighting, and high-altitude captures, and the need for detection methods that combine high accuracy and efficiency in such environments is discussed. The results show that Faster R-CNN outperforms YOLOv7 in detection accuracy, achieving superior precision, recall, mAP, and F1 scores; this accuracy allows ships to be identified and classified successfully even in low-quality images. Despite its accuracy advantage, however, Faster R-CNN falls short in speed, with an average detection time of 53.4 seconds, making it less viable for real-time use. Conversely, YOLOv7 processes images significantly faster, with an average detection time of just 21.6 seconds, though at the cost of lower accuracy; its rapid processing makes it more suitable for real-time applications requiring quick decisions. This study underscores the importance of balancing detection accuracy and speed when selecting an algorithm for ship detection, offering key insights for enhancing maritime safety, traffic monitoring, and autonomous navigation systems.

Keywords - ship detection, satellite imagery, Faster R-CNN, deep learning, YOLOv7
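
The accuracy/speed comparison summarized above can be reproduced in outline with off-the-shelf tools. The following sketch is not the authors' pipeline: it times a pretrained torchvision Faster R-CNN on a single image and computes precision, recall, and F1 from assumed detection counts. The image path and the TP/FP/FN values are placeholders for illustration only.

# Minimal sketch (not the authors' pipeline): time a pretrained Faster R-CNN on one
# image and compute precision/recall/F1 from assumed true/false positive counts.
import time
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("scene.jpg").convert("RGB")   # placeholder satellite image
batch = [preprocess(image)]

start = time.perf_counter()
with torch.no_grad():
    detections = model(batch)[0]                 # dict with 'boxes', 'labels', 'scores'
elapsed = time.perf_counter() - start
print(f"Detection time: {elapsed:.1f} s, boxes found: {len(detections['boxes'])}")

# Precision, recall, and F1 from matched detections (counts are illustrative only).
tp, fp, fn = 42, 5, 8
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"Precision={precision:.3f}  Recall={recall:.3f}  F1={f1:.3f}")

The same timing loop, swapped to a YOLOv7 model, would yield the corresponding speed figure for the second detector; averaging over a test set rather than a single image gives the per-image detection times reported in the abstract.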



This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.