Articles

Found 2 Documents
Journal: CommIT (Communication and Information Technology)

Fish Classification System Using YOLOv3-ResNet18 Model for Mobile Phones
Suryadiputra Liawatimena; Edi Abdurachman; Agung Trisetyarso; Antoni Wibowo; Muhamad Keenan Ario; Ivan Sebastian Edbert
CommIT (Communication and Information Technology) Journal Vol. 17 No. 1 (2023): CommIT Journal
Publisher : Bina Nusantara University

DOI: 10.21512/commit.v17i1.8107

Abstract

Every country in the world needs to report its fish production to the Food and Agriculture Organization of the United Nations (FAO) every year. In 2018, Indonesia ranked among the top five countries in fish production, contributing 8 million tons globally. Despite this ranking, Indonesian fisheries are dominated by traditional and small-scale industries. Hence, a computer-vision-based solution is needed to help detect and classify the fish caught every year. The research presents a method to detect and classify fish on mobile devices using the YOLOv3 model with ResNet18 as a backbone. For the experiment, the dataset comprises 4,000 images of four types of fish, gathered by scraping the Internet and by photographing fish at local markets and harbors. For comparison, two models are used: SSD-VGG and an auto-generated Huawei ExeML model. The results show that the YOLOv3-ResNet18 model achieves 98.45% accuracy in training and 98.15% in evaluation. The model is also tested on mobile devices, with an inference time of 2,115 ms on a Huawei P40 and 3,571 ms on a Realme 7. It can be concluded that the research presents a smaller model suitable for mobile devices while maintaining good accuracy and precision.
End-to-End Steering Angle Prediction for Autonomous Car Using Vision Transformer
Ilvico Sonata; Yaya Heryadi; Antoni Wibowo; Widodo Budiharto
CommIT (Communication and Information Technology) Journal Vol. 17 No. 2 (2023): CommIT Journal
Publisher : Bina Nusantara University

DOI: 10.21512/commit.v17i2.8425

Abstract

Development of autonomous cars is accelerating along with the demand for safe and comfortable autonomous driving. This development relies on deep learning to determine the steering angle of an autonomous car according to the road conditions it faces. In the research, a Vision Transformer (ViT) model is proposed to determine the steering angle from images taken by a front-facing camera on an autonomous car. The model is trained on a public dataset collected on streets around Rancho Palos Verdes and San Pedro, California, consisting of 45,560 images, each labeled with a steering angle value. The proposed model predicts the steering angle well, and its predictions are compared on the same dataset with existing models. The experimental results show that the proposed model achieves a lower MSE of 2,991 than the CNN-based model (5,358) and the combined CNN-LSTM model (4,065). From these results, the ViT model can replace the existing CNN and CNN-LSTM models in predicting the steering angle of an autonomous car.
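The abstract names the architecture family but not its dimensions, so the following is a hedged sketch of the core idea: a Vision Transformer that embeds image patches, runs them through a transformer encoder, and regresses a single steering angle from the class token. All sizes (64×64 input, 16×16 patches, 2 encoder layers) are illustrative placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ViTSteering(nn.Module):
    """Sketch: ViT regressor mapping a camera frame to one steering angle."""
    def __init__(self, img=64, patch=16, dim=128, depth=2, heads=4):
        super().__init__()
        n = (img // patch) ** 2                      # number of patches
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 1)                # regression: one angle

    def forward(self, x):
        tok = self.embed(x).flatten(2).transpose(1, 2)          # (B, N, dim)
        tok = torch.cat([self.cls.expand(x.size(0), -1, -1), tok], dim=1)
        z = self.encoder(tok + self.pos)
        return self.head(z[:, 0]).squeeze(-1)                   # (B,) angles

model = ViTSteering().eval()
with torch.no_grad():
    angle = model(torch.zeros(2, 3, 64, 64))
print(angle.shape)  # torch.Size([2]): one predicted angle per image
```

Training such a regressor against the labeled angles would use a mean-squared-error objective (e.g. `nn.MSELoss`), which matches the MSE figures the abstract reports for comparing models.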