YOLO V3 AND CNN FOR THE TRACKING AND CLASSIFICATION OF AERIAL OBJECTS AND DRONES

Authors:

Zainab Mohanad Issa, Layla H. Abood, Dalal Abdulmohsin, Basim Galeb, Aqeel Al-Hilali

DOI NO:

https://doi.org/10.26782/jmcms.2024.09.00006

Keywords:

Aerial Imaging, Aerial Object Identification, CNN, Classification, DOTA, YOLO

Abstract

The goal of this study is to present advances in aerial object identification that yield more accurate and more precise detections. Specifically, we redefine the object-detection anchor box to include a rotation angle in addition to height and width, and we further allow arbitrary four-corner-point structures. The inclusion of these new anchor boxes gives the model additional flexibility to handle objects oriented at a 45-degree angle of rotation. With these changes we obtain a network that provides more precise localization with minimal trade-offs in speed and reliability. State-of-the-art approaches to computer vision and object recognition rely largely on neural networks and other deep-learning techniques. This active field of study serves numerous applications, including military and surveillance systems, aerial photography, autonomous driving, and airborne observation. To localize an object precisely, contemporary object-detection methods draw rectangular (horizontal and vertical) bounding boxes over the object. Such orthogonal bounding boxes do not account for the pose of the object, which degrades localization quality and restricts subsequent tasks such as object understanding and tracking. We present all results on the DOTA dataset, demonstrating the value of flexible object boundaries, particularly for rotated and non-rectangular objects. We achieved an accuracy of 98.47% for the detection and classification of aerial objects, with forty percent of the data used for training and twenty percent used for testing.
A minimum of 2.8 seconds of processing time was required for the complete program to execute and categorize all of the aerial objects parked at the base.
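As context for the rotated anchor boxes described above, the sketch below shows how a box parameterized by centre, size, and rotation angle maps to the four corner points used by oriented detectors on datasets such as DOTA. This is an illustrative reconstruction, not the authors' implementation; the function name and parameter order are assumptions.

```python
import math

def rotated_box_corners(cx, cy, w, h, theta_deg):
    """Return the four corner points of a box centred at (cx, cy)
    with width w and height h, rotated by theta_deg degrees."""
    t = math.radians(theta_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    # Corner offsets of the axis-aligned box, listed counter-clockwise.
    offsets = [(-w / 2, -h / 2), (w / 2, -h / 2),
               (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each offset about the centre, then translate.
    return [(cx + dx * cos_t - dy * sin_t,
             cy + dx * sin_t + dy * cos_t) for dx, dy in offsets]
```

A horizontal anchor is the special case theta_deg = 0; the arbitrary four-corner structures mentioned in the abstract generalize this further by letting each corner move independently rather than being derived from a single (w, h, theta) triple.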

References:

I. Ahmad, M., Khan, A. M., Mazzara, M., Distefano, S., Ali, M., & Sarfraz, M. S. (2020). A fast and compact 3-D CNN for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters, 19, 1-5.
II. Audebert, N.; Le Saux, B.; Lefèvre, S. Beyond RGB: Very High Resolution Urban Remote Sensing with Multimodal Deep Networks. ISPRS J. Photogramm. Remote Sens. 2018, 140, 20–32.
III. Audebert, N., Le Saux, B., & Lefèvre, S. (2019). Deep learning for classification of hyperspectral data: A comparative review. IEEE geoscience and remote sensing magazine, 7(2), 159-173.
IV. A. O’Connell, J. Smith, and A. Keane, “Distribution feeder hosting capacity analysis,” in 2017 IEEE PES Innovative Smart Grid Technologies Conference Turkey (ISGT-Turkey), Sept 2017, pp. 1–6.
V. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434.
VI. B. G. Bai. Yancheng, “Multi-scale Fully Convolutional Network for Face Detection in the Wild,” IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2078-2087, 2017.
VII. F. Abayaje et al., “A miniaturization of the UWB monopole antenna for wireless baseband transmission,” vol. 8, no. 1, pp. 256-262, 2020.
VIII. H. A. Hussein, Y. S. Mezaal, and B. M. Alameri, “Miniaturized microstrip diplexer based on fr4 substrate for wireless communications,” Elektronika Ir Elektrotechnika, vol. 27, no. 5, pp. 34-40, 2021.
IX. Imani, M., & Ghassemian, H. (2020). An overview on spectral and spatial information fusion for hyperspectral image classification: Current trends and challenges. Information fusion, 59, 59-83.
X. J. Ali and Y. Miz'el, “A new miniature Peano fractal-based bandpass filter design with 2nd harmonic suppression,” 3rd IEEE International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications, Beijing, China, 2009.
XI. Ji, S.; Shen, Y.; Lu, M.; Zhang, Y. Building Instance Change Detection from Large-Scale Aerial Images Using Convolutional Neural Networks and Simulated Samples. Remote Sens. 2019, 11, 1343.
XII. Li, S., Song, W., Fang, L., Chen, Y., Ghamisi, P., & Benediktsson, J. A. (2019). Deep learning for hyperspectral image classification: An overview. IEEE Transactions on Geoscience and Remote Sensing, 57(9), 6690-6709.

XIII. Mezaal, Y. S., H. T. Eyyuboglu, and J. K. Ali, “A novel design of two loosely coupled bandpass filters based on Hilbert-zz resonator with higher harmonic suppression,” in 2013 Third International Conference on Advanced Computing and Communication Technologies (ACCT), 2013.
XIV. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep Learning in Remote Sensing Applications: A Meta-Analysis and Review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177.
XV. Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 645–657.
XVI. S. Roshani et al., “Design of a compact quad-channel microstrip diplexer for L and S band applications,” Micromachines, vol. 14, no. 3, p. 553, 2023.
XVII. S. Roshani, S. I. Yahya, B. M. Alameri, Y. S. Mezaal, L. W. Liu, and S. Roshani, “Filtering power divider design using resonant LC branches for 5G low-band applications,” Sustainability, vol. 14, no. 19, p. 12291, 2022.
XVIII. S. I. Yahya et al., “A New Design Method for Class-E Power Amplifiers Using Artificial Intelligence Modeling for Wireless Power Transfer Applications,” Electronics, vol. 11, no. 21, p. 3608, 2022.
XIX. S. A. AbdulAmeer et al., “Cyber Security Readiness in Iraq: Role of the Human Rights Activists,” International Journal of Cyber Criminology, vol. 16, no. 2, pp. 1–14, 2022.
XX. Tarrad, K. M. et al., “Cybercrime Challenges in Iraqi Academia: Creating Digital Awareness for Preventing Cybercrimes,” International Journal of Cyber Criminology, vol. 16, no. 2, pp. 15–31, 2022.
XXI. Shareef, M. S. et al., “Cloud of Things and fog computing in Iraq: Potential applications and sustainability,” Heritage and Sustainable Development, vol. 5, no. 2, pp. 339–350, Nov. 2023.
XXII. Shareef, M. S., T. Abd, and Y. S. Mezaal, “Gender voice classification with huge accuracy rate,” TELKOMNIKA, vol. 18, no. 5, p. 2612, 2020.
XXIII. Xu, Y.; Wu, L.; Xie, Z.; Chen, Z. Building Extraction in Very High Resolution Remote Sensing Imagery Using Deep Learning and Guided Filters. Remote Sens. 2018, 10, 144.
XXIV. Y. S. Mezaal and S. F. Abdulkareem, “New microstrip antenna based on quasi-fractal geometry for recent wireless systems,” in 2018 26th Signal Processing and Communications Applications Conference (SIU), 2018: IEEE, pp. 1-4.
XXV. Y. S. Mezaal, H. H. Saleh, and H. Al-Saedi, “New compact microstrip filters based on quasi fractal resonator,” Advanced Electromagnetics, vol. 7, no. 4, pp. 93-102, 2018.
XXVI. Y. S. Mezaal, D. A. Hammood, and M. H. Ali, “OTP encryption enhancement based on logical operations,” in 2016 Sixth International Conference on Digital Information Processing and Communications (ICDIPC), 2016.
XXVII. Zhang, S.; Wu, R.; Xu, K.; Wang, J.; Sun, W. R-CNN-Based Ship Detection from High Resolution Remote Sensing Imagery. Remote Sens. 2019, 11, 631.
XXVIII. Zhao, W.; Du, S.; Emery, W.J. Object-Based Convolutional Neural Network for High-Resolution Imagery Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3386–3396.
XXIX. Zhao, C., Qin, B., Feng, S., Zhu, W., Sun, W., Li, W., & Jia, X. (2023). Hyperspectral image classification with multi-attention transformer and adaptive superpixel segmentation-based active learning. IEEE Transactions on Image Processing.
