Found 2 Documents

A Systematic Literature Review: Limitation of Video Conference
Rezki Yunanda; Ian Jeremiah Cahyadi; Bryan Theophilllus; Jason Oei; Ivan Sebastian Edbert; Alvina Aulia
Engineering, MAthematics and Computer Science Journal (EMACS) Vol. 4 No. 3 (2022): EMACS
Publisher : Bina Nusantara University

DOI: 10.21512/emacsjournal.v4i3.8635

Abstract

The Covid-19 pandemic that has hit the world has limited human activities, and video conferencing has become one solution for carrying them out. Video conferencing helps people reduce the spread of Covid-19 while keeping them connected: people can meet without being limited by space and time. However, video conferencing still has various drawbacks, such as limited interaction, since participants only see each other on video, and its technology requirements: it demands devices such as computers or cell phones backed by a stable, fast internet connection to run smoothly. In this paper, the researchers conducted a literature study to determine the limitations of video conferencing. The results show various reasons for these limitations; some are experienced by lecturers, others by students, and others stem from a lack of infrastructure. Students in particular struggle to progress in their studies because of insufficient technological support when learning relies solely on video conferencing.
Deep Transfer Learning for Sign Language Image Classification: A Bisindo Dataset Study
Ika Dyah Agustia Rachmawati; Rezki Yunanda; Muhammad Fadlan Hidayat; Pandu Wicaksono
Engineering, MAthematics and Computer Science Journal (EMACS) Vol. 5 No. 3 (2023): EMACS
Publisher : Bina Nusantara University

DOI: 10.21512/emacsjournal.v5i3.10621

Abstract

This study aims to identify and categorize the BISINDO sign language dataset, which consists primarily of image data. Deep learning techniques are applied using three pre-trained models: ResNet50 for training, MobileNetV4 for validation, and InceptionV3 for testing. The primary objective is to evaluate and compare the performance of each model based on the loss values obtained during training. The training loss gives a rough indication of how well the ResNet50 model has learned the BISINDO dataset, while the validation loss measured with MobileNetV4 reflects the model's ability to generalize. The test loss evaluated with InceptionV3 serves as the final check of performance, assessing the model's ability to classify previously unseen sign language images. The results of these experiments identify the most effective model, i.e., the one achieving the highest performance in sign language recognition on the BISINDO dataset.
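
To make the transfer-learning setup described in the abstract concrete, the sketch below shows how an ImageNet-pretrained backbone (ResNet50, one of the three models named above) can be reused as a frozen feature extractor for sign-image classification in Keras. This is an illustrative assumption, not the authors' code: the dataset directories ("bisindo/train", "bisindo/val"), the class count, and all hyperparameters are hypothetical placeholders.

import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 26           # hypothetical: one class per BISINDO letter sign
IMAGE_SIZE = (224, 224)    # input resolution expected by ResNet50

# Image folders with one sub-directory per sign class; paths are placeholders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "bisindo/train", image_size=IMAGE_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "bisindo/val", image_size=IMAGE_SIZE, batch_size=32)

# ImageNet-pretrained ResNet50 used as a frozen feature extractor.
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMAGE_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMAGE_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)                                   # frozen backbone
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)  # new classification head
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training and validation losses are the quantities the abstract compares
# across architectures.
history = model.fit(train_ds, validation_data=val_ds, epochs=10)

The same pattern would apply to the other backbones mentioned in the abstract by swapping the base model and its matching preprocess_input function; which architecture performs best is exactly the question the study evaluates.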