Harvianto Harvianto
Bina Nusantara University

Published: 2 Documents
Wavelet-Based Color Histogram on Content-Based Image Retrieval
Alexander Alexander; Jeklin Harefa; Yudy Purnama; Harvianto Harvianto
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 16, No 3: June 2018
Publisher : Universitas Ahmad Dahlan

DOI: 10.12928/telkomnika.v16i3.7771

Abstract

Image databases in many domains, including fashion, biometrics, graphic design, and architecture, have grown rapidly. Content-Based Image Retrieval (CBIR) is a technique for finding relevant images in such huge, unannotated databases based on low-level features of a query image. This study proposes applying a 2nd-level Wavelet-Based Color Histogram (WBCH) in a CBIR system. The image database used in this study is taken from Wang's image database, containing 1000 color images. The experimental results show that the 2nd-level WBCH gives better precision (0.777) than the other methods tested, including the 1st-level WBCH, Color Histogram, Color Co-occurrence Matrix, and wavelet texture features. It can be concluded that the 2nd-level WBCH can be applied to a CBIR system.
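The core idea of a wavelet-based color histogram can be sketched in a few lines: decompose each color channel with a 2-D wavelet transform to the desired level, then histogram the low-frequency (approximation) coefficients. The sketch below is a minimal illustration, not the paper's implementation; it assumes a Haar wavelet, 16 bins per channel, and histogram-intersection similarity, none of which are confirmed by the abstract.

```python
import numpy as np

def haar_level(channel):
    """One level of a 2-D Haar decomposition: return the low-low
    (approximation) sub-band by averaging 2x2 pixel blocks."""
    rows = (channel[0::2, :] + channel[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def wbch(image, levels=2, bins=16):
    """Wavelet-Based Color Histogram sketch: decompose each color
    channel to the given level, then histogram the approximation
    coefficients and concatenate the per-channel histograms."""
    feats = []
    for c in range(image.shape[2]):
        band = image[:, :, c].astype(float)
        for _ in range(levels):
            band = haar_level(band)
        hist, _ = np.histogram(band, bins=bins, range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Toy retrieval check with random "images": an image should match
# itself at least as well as it matches a different image.
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (64, 64, 3))
img_b = rng.integers(0, 256, (64, 64, 3))
fa, fb = wbch(img_a), wbch(img_b)
sim = lambda p, q: np.minimum(p, q).sum()  # histogram intersection
assert sim(fa, fa) >= sim(fa, fb)
```

In a retrieval setting, the query image's feature vector would be compared against every database image's vector and the top-ranked matches returned.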
Analysis And Voice Recognition In Indonesian Language Using MFCC And SVM Method
Harvianto Harvianto; Livia Ashianti; Jupiter Jupiter; Suhandi Junaedi
ComTech: Computer, Mathematics and Engineering Applications Vol. 7 No. 2 (2016): ComTech
Publisher : Bina Nusantara University

DOI: 10.21512/comtech.v7i2.2252

Abstract

Voice recognition is one of the biometric technologies. Voice is a unique human characteristic that makes one individual easy to distinguish from another; it also conveys information such as the speaker's gender, emotion, and identity. This research records human voices pronouncing the digits 0 through 9, with and without noise. Features are extracted from these recordings using Mel Frequency Cepstral Coefficients (MFCC). The mean, standard deviation, maximum, minimum, and combinations of these statistics are used to construct the feature vectors, which are then classified using a Support Vector Machine (SVM). Two classification models are built: one based on the speaker and the other on the digit pronounced. Each model is validated using 10-fold cross-validation. The best average accuracy of the two classification models is 91.83%, achieved using mean + standard deviation + min + max as features.
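The feature-construction step described above (collapsing a variable-length MFCC frame sequence into one fixed-length vector of per-coefficient statistics) can be sketched as follows. This is an illustrative sketch only: the MFCC frames are simulated with random data, 13 coefficients per frame is an assumed conventional value, and a nearest-centroid classifier stands in for the SVM used in the paper.

```python
import numpy as np

def stat_features(mfcc_frames):
    """Collapse an (n_frames, n_coeffs) MFCC matrix into one fixed-length
    vector: mean + standard deviation + min + max per coefficient, the
    combination the abstract reports as best-performing."""
    return np.concatenate([
        mfcc_frames.mean(axis=0),
        mfcc_frames.std(axis=0),
        mfcc_frames.min(axis=0),
        mfcc_frames.max(axis=0),
    ])

# Toy data standing in for MFCC output: 2 "digit" classes, 20 recordings
# each, 50 frames x 13 coefficients, with a class-dependent offset.
rng = np.random.default_rng(1)
X, y = [], []
for digit in (0, 1):
    for _ in range(20):
        frames = rng.normal(loc=digit * 2.0, scale=1.0, size=(50, 13))
        X.append(stat_features(frames))
        y.append(digit)
X, y = np.array(X), np.array(y)

# Nearest-centroid classifier as a simple stand-in for the SVM:
# assign each vector to the class whose mean feature vector is closest.
centroids = {d: X[y == d].mean(axis=0) for d in (0, 1)}
pred = [min(centroids, key=lambda d: np.linalg.norm(x - centroids[d]))
        for x in X]
accuracy = np.mean(np.array(pred) == y)
```

In the actual study, each fixed-length vector would instead be fed to an SVM and the model scored with 10-fold cross-validation rather than on the training data as done here.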