Eko Wahyu Prasetyo
Universitas Merdeka Malang

Published: 2 Documents

Articles

Found 2 Documents
Spatial Based Deep Learning Autonomous Wheel Robot Using CNN
Eko Wahyu Prasetyo; Nambo Hidetaka; Dwi Arman Prasetya; Wahyu Dirgantara; Hari Fitria Windi
Lontar Komputer : Jurnal Ilmiah Teknologi Informasi, Vol. 11, No. 3, December 2020
Publisher : Institute for Research and Community Services, Udayana University

DOI: 10.24843/LKJITI.2020.v11.i03.p05

Abstract

Technology is developing rapidly, and robotics is among the most popular fields with scientists. Recently, robots have been created that emulate functions of the human brain: they can make decisions without human help, a capability known as AI (Artificial Intelligence). This technology is now being developed for wheeled vehicles so that they can drive without colliding with obstacles. In further research, Nvidia introduced an autonomous vehicle named Nvidia Dave-2, which became popular and showed an accuracy rate of 90%. The CNN (Convolutional Neural Network) method is used in the track recognition process, with input in the form of trajectory images taken from several angles. The data is trained using Jupyter Notebook, and the training results are then used to automate the movement of the robot on the track where the data was collected. The robot uses these results to determine the path it will take. The more images taken as data, the more precise the results will be, but the longer the image data will take to train. From the data obtained, the highest train loss, 1.829455, occurred in the first epoch, and the highest test loss, 30.90127, in the third epoch. This indicates better steering control, which means better stability.
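The pipeline the abstract describes (convolutional feature extraction over track images, then a regression head producing a steering command) can be sketched in miniature. This is an illustrative NumPy sketch, not the authors' code: the function names (`conv2d`, `steering_from_features`) and the single-kernel, single-weight setup are assumptions made for brevity, standing in for a full trained CNN.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a grayscale image with one kernel,
    the basic operation a CNN layer applies to the trajectory images."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def steering_from_features(features, weight, bias=0.0):
    """Toy regression head: global-average-pool the feature map, then squash
    to a steering command in (-1, 1) with tanh (left/right of track center)."""
    pooled = features.mean()
    return float(np.tanh(pooled * weight + bias))

# Hypothetical 5x5 "track image" and 3x3 edge-like kernel for illustration.
image = np.ones((5, 5))
kernel = np.ones((3, 3))
features = conv2d(image, kernel)        # shape (3, 3), each entry 9.0
angle = steering_from_features(features, weight=0.1)
```

In a real system such as Dave-2, many stacked convolutional layers and a dataset of labeled trajectory images replace the single kernel and hand-set weight here; the train/test losses reported in the abstract would come from comparing the predicted steering command against the recorded one.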
Fine-Tuned RetinaNet for Real-Time Lettuce Detection
Eko Wahyu Prasetyo; Hidetaka Nambo
Lontar Komputer : Jurnal Ilmiah Teknologi Informasi, Vol. 15, No. 1, April 2024
Publisher : Institute for Research and Community Services, Udayana University

DOI: 10.24843/LKJITI.2024.v15.i01.p02

Abstract

The agricultural industry plays a vital role in meeting the global demand for food production. Along with population growth, there is an increasing need for efficient farming practices that can maximize crop yields. Conventional methods of harvesting lettuce often rely on manual labor, which can be time-consuming, labor-intensive, and prone to human error. These challenges motivate research into automation technology, such as robotics, to improve harvest efficiency and reduce reliance on human intervention. Deep learning-based object detection models have shown impressive success in various computer vision tasks, such as object recognition. A RetinaNet model can be trained to identify and localize lettuce accurately. However, to deploy object recognition models in real-world agricultural scenarios, the pre-trained models must be fine-tuned to adapt to the specific characteristics of lettuce, such as shape, size, and occlusion. Fine-tuning the models on lettuce-specific datasets can improve their accuracy and robustness for detecting and localizing lettuce. The fine-tuned RetinaNet achieved its highest accuracy of 0.782, recall of 0.844, f1-score of 0.875, and mAP of 0.962. For these metrics, higher scores indicate better model performance.
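The evaluation scores the abstract reports (precision/accuracy, recall, F1) are standard detection metrics computed from true-positive, false-positive, and false-negative counts. As a minimal illustration of how such scores are derived (this is a generic sketch, not the paper's evaluation code; the function name and counts are hypothetical):

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 from detection counts.

    tp: correctly detected lettuce heads (true positives)
    fp: spurious detections (false positives)
    fn: missed lettuce heads (false negatives)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts for illustration: 8 correct detections,
# 2 false alarms, 2 missed heads.
p, r, f1 = detection_metrics(8, 2, 2)   # each evaluates to 0.8
```

mAP additionally averages precision over recall thresholds and IoU settings across the dataset, which is why it is reported separately from the per-threshold scores above.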