Anton Satria Prabuwono
King Abdulaziz University

Published: 5 Documents

Articles (Found 5 Documents)

Galvanic Skin Response Data Classification for Emotion Detection Djoko Budiyanto Setyohadi; Sri Kusrohmaniah; Sebastian Bagya Gunawan; Pranowo Pranowo; Anton Satria Prabuwono
International Journal of Electrical and Computer Engineering (IJECE) Vol 8, No 5: October 2018
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v8i5.pp4004-4014

Abstract

Emotion detection is a demanding task that requires a complicated process; it also requires proper training data and an appropriate algorithm. The process involves experimental psychological research and classification methods. This paper describes a method for detecting emotion using Galvanic Skin Response (GSR) data. We used the Positive and Negative Affect Schedule (PANAS) method to obtain good training data. A Support Vector Machine, together with suitable preprocessing, is then used to classify the GSR data. To validate the proposed approach, the Receiver Operating Characteristic (ROC) curve and accuracy are used. Our method achieves an accuracy of about 75.65% and an ROC area of about 0.8019, indicating that emotion detection can be performed satisfactorily.
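A minimal sketch of the classification and evaluation step described in this abstract, assuming the GSR recordings have already been preprocessed into fixed-length feature vectors with PANAS-derived binary labels; the placeholder data, feature dimensionality, and SVM settings below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch: SVM classification of preprocessed GSR feature vectors,
# evaluated with accuracy and the area under the ROC curve (assumed setup,
# not the paper's actual data or parameters).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

# X: one row of GSR-derived features per trial; y: 1 = positive affect, 0 = negative
# (labels assumed to come from PANAS self-reports).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # placeholder features
y = rng.integers(0, 2, size=200)       # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_score = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, y_pred))
print("ROC AUC :", roc_auc_score(y_test, y_score))
```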
Assistive Robotic Technology: A Review Anton Satria Prabuwono; Khalid Hammed S. Allehaibi; Kurnianingsih Kurnianingsih
Computer Engineering and Applications Journal Vol 6 No 2 (2017)
Publisher : Universitas Sriwijaya

DOI: 10.18495/comengapp.v6i2.203

Abstract

Older people with chronic conditions, which may even lead to disabilities, face many challenges in performing daily activities. Assistive robots are considered tools to provide companionship and to assist the daily lives of older and disabled people. This paper presents a review of assistive robotic technology, particularly for older and disabled people. The result of this review constitutes a step towards the development of assistive robots capable of addressing some of the problems of older and disabled people, so that they may remain at home and live independently.
Application of Computer Vision for Polishing Robot in Automotive Manufacturing Industries Adnan Rachmat Anom Besari; Ruzaidi Zamri; Md. Dan Md. Palil; Anton Satria Prabuwono
EMITTER International Journal of Engineering Technology Vol 2 No 2 (2014)
Publisher : Politeknik Elektronika Negeri Surabaya (PENS)

DOI: 10.24003/emitter.v2i2.22

Abstract

Polishing is a highly skilled manufacturing process with many constraints and considerable interaction with the environment. In general, the purpose of polishing is to obtain uniform surface roughness distributed evenly over the part's surface. In order to reduce polishing time and cope with the shortage of skilled workers, robotic polishing technology has been investigated. This paper studies a vision system for measuring surface defects that have been characterized at several levels of surface roughness. The surface defect data are learned using artificial neural networks to produce a decision for moving the actuator of the arm robot. Force and rotation time were chosen as the output parameters of the artificial neural networks. Results show that, although there is a considerable difference between the parameter values acquired from vision data and those from real data, it is still possible to characterize surface defects using a vision sensor to a certain limit of accuracy. The overall results of this research should encourage further development in this area towards robust computer-vision-based surface measurement systems for industrial robotics, especially in the polishing process.

Keywords: polishing robot, vision sensor, surface defects, artificial neural networks
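A rough illustration of the learning step described above: a small feed-forward network mapping vision-derived surface-defect features to the two output parameters named in the abstract (force and rotation time). The feature set, network size, and synthetic data are assumptions for the sketch, not the authors' implementation.

```python
# Illustrative sketch: a small feed-forward network that maps surface-defect
# features (e.g. roughness descriptors from the vision system) to polishing
# force and rotation time. Data and architecture are assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform(size=(150, 4))                 # placeholder defect features per inspected region
y = np.column_stack([                          # two targets: [force (N), rotation time (s)]
    10 + 5 * X[:, 0] + rng.normal(scale=0.5, size=150),
    2 + 3 * X[:, 1] + rng.normal(scale=0.2, size=150),
])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1))
model.fit(X, y)

# Predict actuator commands for a newly inspected surface patch.
force, rotation_time = model.predict(X[:1])[0]
print(f"force ~ {force:.1f}, rotation time ~ {rotation_time:.1f}")
```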
NoonGil Lens+: Second Level Face Recognition from Detected Objects to Decrease Computation and Performance Trade-off Jo Vianto; Djoko Budiyanto Setyohadi; Anton Satria Prabuwono; Mohd Sanusi Azmi; Eddy Julianto
Indonesian Journal of Information Systems Vol. 4 No. 2 (2022): February 2022
Publisher : Program Studi Sistem Informasi Universitas Atma Jaya Yogyakarta

DOI: 10.24002/ijis.v4i2.5488

Abstract

Artificial intelligence has developed in various fields, and this development became more significant after Neural Networks (NNs) began to gain popularity. Convolutional Neural Networks (CNNs) are good at solving problems such as classification and object detection. However, a CNN model tends to be built to solve one specific problem, and when both object detection and face recognition are needed it is difficult to make a single model that works well. NoonGil Lens+ is proposed as an approach that can solve both problems at once, while also reducing the trade-off between accuracy and execution speed. The proposed approach, called NoonGil Lens+, is a system that connects YOLOv3 and FaceNet; it is inspired by the Korean series 'STARTUP'. In this paper the authors develop only the FaceNet model and the proposed system (NoonGil Lens+). Region Selection, a machine-learning-based greedy approach, is proposed to determine which snapshots to feed into FaceNet for facial identity classification. FaceNet is trained on the CelebA dataset, which has gone through preprocessing, and is validated using the LFW dataset. NoonGil Lens+ was validated using 70 images of 7 celebrities, characters, and athletes. In general, the research was carried out successfully: NoonGil Lens+ with Region Selection achieves an accuracy of up to 75.2%, and the Region Selection execution speed is also faster than that of Cascade Faces.
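A structural sketch of the two-stage pipeline this abstract describes (a detector first, then face-embedding matching on selected regions). The `detect_objects`, `select_regions`, and `embed_face` helpers are hypothetical placeholders standing in for a YOLOv3 detector, the paper's Region Selection step, and a FaceNet embedding model; they are not part of any specific library, and the matching threshold is an assumption.

```python
# Illustrative two-stage pipeline: object detection followed by face recognition
# on selected regions. All three callables are hypothetical stand-ins for YOLOv3,
# the Region Selection step, and a FaceNet embedding model.
from typing import Callable, Dict, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def recognize_faces(image: np.ndarray,
                    detect_objects: Callable[[np.ndarray], List[Box]],
                    select_regions: Callable[[List[Box]], List[Box]],
                    embed_face: Callable[[np.ndarray], np.ndarray],
                    gallery: Dict[str, np.ndarray],
                    threshold: float = 0.8) -> List[str]:
    """Detect objects, pick candidate face regions, and match each embedding
    against a gallery of known identities by Euclidean distance."""
    names = []
    boxes = detect_objects(image)               # stage 1: detector (e.g. YOLOv3)
    for x, y, w, h in select_regions(boxes):    # greedy Region Selection (assumed)
        crop = image[y:y + h, x:x + w]
        emb = embed_face(crop)                  # stage 2: face embedding (e.g. FaceNet)
        # nearest known identity, accepted only if close enough
        name, dist = min(((n, float(np.linalg.norm(emb - g))) for n, g in gallery.items()),
                         key=lambda t: t[1])
        names.append(name if dist < threshold else "unknown")
    return names
```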
Nondestructive Chicken Egg Fertility Detection Using CNN-Transfer Learning Algorithms Shoffan Saifullah; Rafal Drezewski; Anton Yudhana; Andri Pranolo; Wilis Kaswijanti; Andiko Putro Suryotomo; Seno Aji Putra; Alin Khaliduzzaman; Anton Satria Prabuwono; Nathalie Japkowicz
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 3 (2023): September
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i3.26722

Abstract

This study explores the application of CNN-Transfer Learning for nondestructive chicken egg fertility detection. Four models, VGG16, ResNet50, InceptionNet, and MobileNet, were trained and evaluated on a dataset using augmented images. The training results demonstrated that all models achieved high accuracy, indicating their ability to accurately learn and classify chicken eggs’ fertility state. However, when evaluated on the testing set, variations in accuracy and performance were observed. VGG16 achieved a high accuracy of 0.9803 on the testing set but had challenges in accurately detecting fertile eggs, as indicated by a NaN sensitivity value. ResNet50 also achieved an accuracy of 0.98 but struggled to identify fertile and non-fertile eggs, as suggested by NaN values for sensitivity and specificity. However, InceptionNet demonstrated excellent performance, with an accuracy of 0.9804, a sensitivity of 1 for detecting fertile eggs, and a specificity of 0.9615 for identifying non-fertile eggs. MobileNet achieved an accuracy of 0.9804 on the testing set; however, it faced challenges in accurately classifying the fertility status of chicken eggs, as indicated by NaN values for both sensitivity and specificity. While the models showed promise during training, variations in accuracy and performance were observed during testing. InceptionNet exhibited the best overall performance, accurately classifying fertile and non-fertile eggs. Further optimization and fine-tuning of the models are necessary to address the limitations in accurately detecting fertile and non-fertile eggs. This study highlights the potential of CNN-Transfer Learning for nondestructive fertility detection and emphasizes the need for further research to enhance the models’ capabilities and ensure accurate classification.
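A minimal transfer-learning sketch in the spirit of this abstract, using Keras' pretrained VGG16 as a frozen feature extractor with a new binary head for fertile/non-fertile classification. The input size, head layout, and training settings are assumptions, and the egg-image dataset itself is not included.

```python
# Illustrative transfer-learning sketch: pretrained VGG16 backbone (frozen) with a
# new binary classification head for fertile vs. non-fertile eggs. Input size,
# head design, and hyperparameters are assumptions, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features fixed; optionally fine-tune later

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),    # fertile (1) vs. non-fertile (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects built from the egg images,
# e.g. via tf.keras.utils.image_dataset_from_directory(...); omitted here.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```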