Found 2 Documents
Safe-Deposit Box Using Fingerprint and Blynk
Yulianto Yulianto; Budi Juarto; Ika Dyah Agustia Rachmawati; Risma Yulistiani
Engineering, MAthematics and Computer Science (EMACS) Journal Vol. 4 No. 1 (2022): EMACS
Publisher : Bina Nusantara University

DOI: 10.21512/emacsjournal.v4i1.8080

Abstract

The criminal act of robbery makes people anxious, especially in urban areas. There are many ways to guard against robbery at home and in the office, such as strengthening the security systems that protect valuables. A safe-deposit box is an item used to store valuables and to prevent thieves from taking them. To increase security, technologies such as fingerprints, passwords, and buzzers have been developed for this purpose. This research focuses on a safe security system that uses a fingerprint sensor connected to the internet through the Blynk application, so that the user receives a notification whenever the servo is opened or closed. The fingerprint sensor provides access to open the door; the Arduino Uno microcontroller stores the command logic of the system; the stepper motor drives the opening and closing of the servo; and the ESP8266 Wi-Fi module connects the components over the internet to the Blynk application, which serves as a remote control and notifies the owner of incoming access to the home, following the Internet of Things (IoT) concept.
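The control flow the abstract describes (fingerprint match → lock actuation → Blynk notification) can be sketched as plain logic. This is a minimal, hypothetical sketch only: the function and parameter names are illustrative, and real hardware calls (fingerprint sensor, stepper motor, Blynk client) are stubbed out with a notification callback.

```python
# Hypothetical sketch of the safe's control logic: a fingerprint match
# opens the lock and triggers a Blynk-style notification. All names are
# illustrative; hardware access is replaced by a callback.

def check_fingerprint(scanned_id, enrolled_ids):
    """Return True if the scanned fingerprint ID is enrolled."""
    return scanned_id in enrolled_ids

def safe_controller(scanned_id, enrolled_ids, notify):
    """Decide the lock state for one scan and notify the owner."""
    if check_fingerprint(scanned_id, enrolled_ids):
        notify("Safe OPENED by fingerprint ID %d" % scanned_id)
        return "open"
    notify("Access DENIED for fingerprint ID %d" % scanned_id)
    return "closed"

if __name__ == "__main__":
    messages = []
    state = safe_controller(7, enrolled_ids={3, 7}, notify=messages.append)
    print(state)        # open
    print(messages[0])  # Safe OPENED by fingerprint ID 7
```

In the actual system this decision would run on the Arduino Uno, with the ESP8266 forwarding the notification to the Blynk app over Wi-Fi.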
Deep Transfer Learning for Sign Language Image Classification: A Bisindo Dataset Study
Ika Dyah Agustia Rachmawati; Rezki Yunanda; Muhammad Fadlan Hidayat; Pandu Wicaksono
Engineering, MAthematics and Computer Science Journal (EMACS) Vol. 5 No. 3 (2023): EMACS
Publisher : Bina Nusantara University

DOI: 10.21512/emacsjournal.v5i3.10621

Abstract

This study aims to identify and categorize the BISINDO sign language dataset, which consists primarily of image data. Deep learning techniques are applied with three pre-trained models: ResNet50 for training, MobileNetV4 for validation, and InceptionV3 for testing. The primary objective is to evaluate and compare the performance of each model based on the loss derived during training. The training loss provides a rough measure of how well the ResNet50 model fits the BISINDO dataset, while the validation loss measured with MobileNetV4 indicates the model's generalization ability. The test loss evaluated with InceptionV3 serves as the ultimate litmus test of performance, assessing the model's ability to classify unseen sign language images. The results of these exhaustive experiments determine the most effective model and the highest achievable performance in sign language recognition on the BISINDO dataset.
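The comparison criterion the abstract describes, judging models by their reported loss and picking the lowest, can be sketched in a few lines. This is a hypothetical illustration only: the function name and the loss values are placeholders, not results or code from the study.

```python
# Hypothetical sketch of the abstract's model-comparison criterion:
# each pre-trained model reports a loss, and the model with the lowest
# loss is judged most effective. Loss values below are placeholders.

def select_best_model(losses):
    """Return the name of the model with the lowest reported loss."""
    return min(losses, key=losses.get)

if __name__ == "__main__":
    example_losses = {"ResNet50": 0.42, "MobileNetV4": 0.55, "InceptionV3": 0.48}
    print(select_best_model(example_losses))  # ResNet50
```

In practice the training, validation, and test losses come from fitting and evaluating the pre-trained networks on the BISINDO image data; the selection step itself reduces to this comparison.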