
Found 6 Documents

Adaptive Background Extraction for Video Based Traffic Counter Application Using Gaussian Mixture Models Algorithm Raymond Sutjiadi; Endang Setyati; Resmana Lim
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 13, No 3: September 2015
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v13i3.1772


Big cities around the world face traffic congestion. The problem is caused by the number of vehicles growing over time while the construction of adequate new road sections does not keep pace. One important aspect of traffic management is the availability of traffic density data for every road section. The purpose of this paper is therefore to analyze how video recorded by CCTV cameras, normally used for visual observation, can also serve as a tool for counting traffic density. The method used in this paper is adaptive background extraction with the Gaussian Mixture Models algorithm. It is expected to be an alternative way to obtain traffic density data with adequate accuracy, as one input to the decision-making process in traffic engineering.
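The adaptive Gaussian-mixture background model the abstract describes can be illustrated with a per-pixel sketch in the Stauffer-Grimson style. This is a minimal toy, not the paper's implementation: every parameter value below (three components, learning rate 0.05, background weight ratio 0.9, the variance floor) is an assumption. A real traffic counter runs one such model per pixel, or uses an optimized routine such as OpenCV's MOG2 background subtractor.

```python
import numpy as np

class PixelGMM:
    """Adaptive Gaussian mixture for one grayscale pixel (Stauffer-Grimson style)."""

    def __init__(self, k=3, alpha=0.05, bg_ratio=0.9):
        self.w = np.full(k, 1.0 / k)          # component weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means
        self.var = np.full(k, 225.0)          # component variances
        self.alpha = alpha                    # learning rate
        self.bg_ratio = bg_ratio              # weight mass treated as background

    def update(self, x):
        """Feed one intensity sample; return True if x looks like foreground."""
        match = np.abs(x - self.mu) < 2.5 * np.sqrt(self.var)
        self.w *= (1.0 - self.alpha)          # decay all weights
        if match.any():
            i = int(np.argmax(match))         # first matching component
            self.w[i] += self.alpha
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
            self.var[i] = max(self.var[i], 4.0)  # keep variance from collapsing
        else:
            i = int(np.argmin(self.w))        # replace the weakest component
            self.mu[i], self.var[i], self.w[i] = x, 225.0, self.alpha
        self.w /= self.w.sum()
        # Components with high weight and low variance model the background.
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.bg_ratio)) + 1
        return i not in set(order[:n_bg].tolist())
```

Feeding a stream of road-surface intensities makes the dominant component converge to the background; a passing vehicle's much brighter (or darker) pixel value then fails to match the background components and is flagged as foreground.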
Digit Classification of Majapahit Relic Inscription using GLCM-SVM Tri Septianto; Endang Setyati; Joan Santoso
Knowledge Engineering and Data Science Vol 1, No 2 (2018)
Publisher : Universitas Negeri Malang

DOI: 10.17977/um018v1i22018p46-54


A higher level of image processing usually involves some kind of classification or recognition. Digit classification is an important subfield of handwritten recognition. Handwritten digits show large variations, so template matching is generally inefficient and low in accuracy. In this paper, we propose classifying the year digits of relic inscriptions of the Kingdom of Majapahit using a Support Vector Machine (SVM). This method can cope with very large feature dimensions without reducing the extracted features. Feature extraction uses the Gray-Level Co-occurrence Matrix (GLCM), a method designed specifically for texture analysis. The experiment covers ten classes, one for each digit 0-9. Each class is tested with 10 samples, so the whole test set contains 100 year-digit images. The combination of the GLCM and SVM methods obtains an average classification accuracy of about 77%.
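As a concrete illustration of the GLCM texture features that would feed the SVM, the following sketch builds a co-occurrence matrix for one pixel offset and derives four common Haralick descriptors (contrast, energy, homogeneity, correlation). The quantization level, the single horizontal offset, and the feature choice are assumptions for illustration, not taken from the paper; the resulting feature vectors would then be passed to an SVM classifier such as scikit-learn's SVC.

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=8):
    """Normalized Gray-Level Co-occurrence Matrix for one pixel offset."""
    q = np.clip((img.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    h, w = q.shape
    a = q[:h - dy, :w - dx]        # reference pixels
    b = q[dy:, dx:]                # neighbors at offset (dy, dx)
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)  # count co-occurring gray levels
    return P / P.sum()

def haralick_features(P):
    """Contrast, energy, homogeneity, correlation of a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    corr = (((i - mu_i) * (j - mu_j) * P).sum() / (si * sj)) if si * sj > 0 else 0.0
    return np.array([contrast, energy, homogeneity, corr])
```

A smooth digit stroke yields a GLCM concentrated on the diagonal (low contrast, high homogeneity), while an eroded, noisy stone surface spreads the mass off-diagonal; this separation is what the SVM exploits.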
Jurnal Ilmiah Kursor Vol 8 No 3 (2016)
Publisher : Universitas Trunojoyo Madura

DOI: 10.28961/kursor.v8i3.61


In communication using text input, a viseme (visual phoneme) is derived from a group of phonemes having similar visual appearances. The hidden Markov model (HMM) is a popular mathematical approach for sequence classification such as speech recognition. For speech emotion recognition, an HMM is trained for each emotion, and an unknown sample is classified according to the model that best explains the derived feature sequence. The Viterbi algorithm is used with the HMM to find the most probable sequence of hidden states given the observations. In the first stage of this work, we defined an Indonesian viseme set and the associated mouth shapes, i.e., a system for text input segmentation. In the second stage, one affection (emotion) type is chosen as an input to the system. In the last stage, we experimented with trigram HMMs to generate the viseme sequence used for synchronized mouth shapes and lip movements. The whole system is interconnected as a pipeline. The final system produces a viseme sequence for natural speech of Indonesian sentences with affection. We show through various experiments that the proposed approach results in about an 82.19% relative improvement in classification accuracy.
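The Viterbi decoding step the abstract relies on can be sketched for a toy text-to-viseme model. The two viseme classes below ("BMP" for a bilabial closure, "AH" for an open mouth), the phoneme observations, and all probabilities are invented for illustration; the paper's actual model is a trigram HMM over a full Indonesian viseme set, while this sketch is first-order.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence, computed in log space."""
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            # best predecessor state for s at time t
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s] * emit_p[s][obs[t]])
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 1 - 1, -1):
        if t == 0:
            break
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy model: visemes as hidden states, phonemes as observations.
states = ["BMP", "AH"]
start_p = {"BMP": 0.5, "AH": 0.5}
trans_p = {"BMP": {"BMP": 0.3, "AH": 0.7},
           "AH":  {"BMP": 0.7, "AH": 0.3}}
emit_p = {"BMP": {"b": 0.45, "m": 0.45, "a": 0.10},
          "AH":  {"b": 0.05, "m": 0.05, "a": 0.90}}
```

Decoding the phoneme sequence "b", "a", "m" yields the mouth-shape sequence BMP, AH, BMP: closed lips, open mouth, closed lips, as one would expect for a syllable like "bam".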
Utilization Of Augmented Reality In Automotive Subjects For Basic Competencies Of Four-Wheeled Vehicle Brake Systems Muhammad Farkhan; Endang Setyati; Francisca Haryanti Chandra
BEST Vol 3 No 2 (2021): BEST
Publisher : Program Studi Teknik Elektro Universitas PGRI Adi Buana Surabaya

DOI: 10.36456/best.vol3.no2.4243


In automotive learning, teachers generally use books and teaching aids as learning media, and learning outcomes have been low. A learning medium that can help improve outcomes is therefore needed; one way to address this problem is to use media based on augmented reality (AR) technology. In this study, an Android-based AR learning medium was developed to simulate the brake system of four-wheeled vehicles in 3D. The AR system uses marker-based tracking and was built with 3ds Max software and the Vuforia plug-in. Pedagogically, the learning system applies the Modality Principle. The participants were 44 class XI students of SMK YPM 4 Taman, divided into two groups of 22. This research used an experimental design: both groups received a pre-test and a post-test; the experimental group was taught with the AR-based learning media, while the control group used conventional learning media. The results were less than optimal because of the pandemic. The average pre-test score of both the control and experimental groups was 49.32; the post-test score was 62.73 for the control group and 73.18 for the experimental group. The difference in post-test scores between the two groups indicates that the treatment of providing AR-based learning media had an effect. Observations and interviews showed that students were more active and eager in learning activities, which indicates that this media interests students and can raise their motivation to learn.
The LeNet CNN Model for Recognizing Year Digits on Inscriptions of the Majapahit Kingdom Tri Septianto; Endang Setyati; Joan Santoso
Jurnal Teknologi dan Sistem Komputer Volume 6, Issue 3, Year 2018 (July 2018)
Publisher : Department of Computer Engineering, Engineering Faculty, Universitas Diponegoro

DOI: 10.14710/jtsiskom.6.3.2018.106-109


Inscription objects have features that are difficult to recognize because they are generally eroded and faded. This study analyzed the performance of a CNN using the LeNet model to recognize year digits found on the relic inscriptions of the Majapahit Kingdom. Object recognition with the LeNet model reached a maximum accuracy of 85.08% at 10 epochs in 6069 seconds. This performance was better than that of VGG, the comparison model, which reached a maximum accuracy of only 11.39% at 10 epochs in 40223 seconds.
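The LeNet model's size can be made concrete with the classic LeNet-5 shape arithmetic. The 32x32 input size, filter counts, and fully connected widths below follow the textbook LeNet-5 configuration with the study's ten digit classes; the paper's exact hyperparameters may differ.

```python
def conv_out(n, k, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (n + 2 * pad - k) // stride + 1

def lenet_summary(in_size=32, n_classes=10):
    """Layer shapes and trainable parameter counts of a LeNet-5 style stack."""
    n = in_size
    layers = []
    n = conv_out(n, 5)            # C1: 6 filters of 5x5 on 1 input channel
    layers.append(("C1 conv 6@5x5", (6, n, n), (5 * 5 * 1 + 1) * 6))
    n = conv_out(n, 2, stride=2)  # S2: 2x2 pooling, no parameters
    layers.append(("S2 pool 2x2", (6, n, n), 0))
    n = conv_out(n, 5)            # C3: 16 filters of 5x5 on 6 channels
    layers.append(("C3 conv 16@5x5", (16, n, n), (5 * 5 * 6 + 1) * 16))
    n = conv_out(n, 2, stride=2)  # S4: 2x2 pooling
    layers.append(("S4 pool 2x2", (16, n, n), 0))
    flat = 16 * n * n             # flattened feature vector (400 for 32x32 input)
    layers.append(("F5 fc 120", (120,), flat * 120 + 120))
    layers.append(("F6 fc 84", (84,), 120 * 84 + 84))
    layers.append(("output", (n_classes,), 84 * n_classes + n_classes))
    return layers, sum(p for _, _, p in layers)
```

For a 32x32 input this yields the familiar 28-14-10-5 feature map sizes, a 400-dimensional flattened vector, and 61,706 trainable parameters in total, which helps explain why LeNet trained roughly 6-7x faster here than the far larger VGG comparison model.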
Extraction of Eye and Mouth Features for Drowsiness Face Detection Using Neural Network Elis Fitrianingsih; Endang Setyati; Luqman Zaman
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control Vol 3, No 2, May-2018
Publisher : Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v3i2.589


Facial feature extraction is the process of locating facial components such as the eyes, nose, mouth, and other parts of the human face. It is essential for initializing processing techniques such as face tracking, facial expression recognition, or face shape recognition. Among all facial features, detection of the eye area is especially important: once the eyes are detected and localized, the locations of all other facial features can be identified relative to them. This study describes automated algorithms for feature extraction of the eyes and mouth. The data take the form of video, which is converted into a sequence of images through a frame extraction process. From this image sequence, features are extracted based on the morphology of the eyes and mouth using the backpropagation neural network method. Once the feature extraction of the eyes and mouth is complete, the results can later be used to detect a person's drowsiness and be useful for other research.
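The backpropagation step behind such a classifier can be sketched with a tiny network on synthetic features. The two inputs (an eye-openness and a mouth-openness measure), the rule used to label the synthetic data (eyes nearly shut or a wide yawn means drowsy), and all network sizes and learning rates below are assumptions for illustration only, not the paper's features or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for extracted features:
# x0 = eye openness (0 = closed), x1 = mouth openness (1 = wide yawn).
X = rng.uniform(0.0, 1.0, (400, 2))
y = ((X[:, 0] < 0.3) | (X[:, 1] > 0.7)).astype(float)[:, None]  # drowsy label

# One-hidden-layer network trained with plain backpropagation.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2)            # predicted drowsiness probability
    d2 = (p - y) / len(X)               # cross-entropy gradient at the output
    d1 = (d2 @ W2.T) * h * (1.0 - h)    # backpropagated through the hidden layer
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

accuracy = ((p > 0.5) == (y > 0.5)).mean()
```

After training, the network reproduces the labeling rule well above chance on the training data; in the study, the analogous inputs would be morphological measurements of the eyes and mouth extracted from video frames.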