Forward feature selection for toxic speech classification using support vector machine and random forest
Agustinus Bimo Gumelar; Astri Yogatama; Derry Pramono Adi; Frismanda Frismanda; Indar Sugiarto
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 11, No 2: June 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v11.i2.pp717-726

Abstract

This study describes methods for eliminating irrelevant features from speech data to enhance toxic speech classification accuracy and reduce the complexity of the learning process. A wrapper method is applied, implementing forward selection with support vector machine (SVM) and random forest (RF) classifiers as the evaluation algorithms. Eight main speech features were extracted, each with nine statistical sub-features, yielding 72 features in total. The classifier algorithms were implemented in Python on 2,000 toxic speech samples collected from YouTube, the world's largest video-sharing platform. The experiment shows that after feature selection, classification performance with both SVM and RF improves substantially. Using forward feature selection, we selected 10 of the 72 original features, achieving 99.5% classification accuracy with RF and 99.2% with SVM.
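The wrapper-style forward selection described in this abstract can be sketched in a few lines: greedily add whichever remaining feature most improves a classifier-based score. This is a minimal illustration, not the authors' code; the feature names and toy scoring function below are invented for demonstration (in the actual study, the score would be the cross-validated accuracy of an SVM or RF trained on the candidate subset).

```python
# Greedy forward feature selection (wrapper method): a minimal sketch.
# Feature names and the scoring function are illustrative, not the paper's code.

def forward_select(features, score_fn, k):
    """Greedily add the feature that most improves score_fn(subset),
    until k features have been selected."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best_feat, best_score = None, float("-inf")
        for f in remaining:
            s = score_fn(selected + [f])  # e.g. cross-validated classifier accuracy
            if s > best_score:
                best_feat, best_score = f, s
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected

# Toy scorer: pretend "mfcc_mean" and "pitch_std" are the informative features.
weights = {"mfcc_mean": 0.6, "pitch_std": 0.3, "energy_max": 0.05, "zcr_min": 0.01}
score = lambda subset: sum(weights.get(f, 0.0) for f in subset)

picked = forward_select(list(weights), score, k=2)
print(picked)  # → ['mfcc_mean', 'pitch_std']
```

In practice, `score_fn` would wrap model training and evaluation; scikit-learn's `SequentialFeatureSelector` with `direction="forward"` implements the same idea around any estimator.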
Combination of Hilbert Multispectrum and Cochleagram Features for Speech Emotion Identification (Kombinasi Fitur Multispektrum Hilbert dan Cochleagram untuk Identifikasi Emosi Wicara)
Agustinus Bimo Gumelar; Eko Mulyanto Yuniarno; Wiwik Anggraeni; Indar Sugiarto; Andreas Agung Kristanto; Mauridhi Hery Purnomo
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 9 No 2: May 2020
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada

DOI: 10.22146/jnteti.v9i2.166

Abstract

In social human interaction, the voice is one of the means by which emotional states are expressed. Human speech, produced through the vocal process and organized as word sequences, forms patterns that can convey the speaker's psychological condition. These patterns provide distinctive characteristics that can also be exploited for biometric identification. Spectrum-image visualization techniques are employed to represent the speech signal. This study aims to identify the types of emotion in the human voice using a combination of Hilbert multispectrum and cochleagram features. The Hilbert spectrum represents the result of the Hilbert-Huang Transform (HHT), which decomposes non-linear, non-stationary emotional speech signals into intrinsic mode functions. The cochleagram, by imitating the functions of the outer and middle ear, decomposes the emotional speech signal into frequency bands as a continuum of cochlear responses. The two spectrum-image inputs are processed using a Convolutional Neural Network (CNN), well known for image recognition because it mimics the mechanism of the human retina, and a Long Short-Term Memory (LSTM) network. In experiments on three public speech-emotion datasets, each with the same eight emotion classes, this approach obtained an accuracy of 90.97% with CNN and 80.62% with LSTM.
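The Hilbert spectrum mentioned above is built from the analytic signal, whose magnitude gives the instantaneous amplitude of a signal component. The sketch below shows only that building block, using the standard FFT-based Hilbert transform; the full HHT would first run empirical mode decomposition and compute this per intrinsic mode function. The signal parameters are illustrative, not the paper's.

```python
# Analytic-signal envelope, a building block of the Hilbert spectrum: a sketch.
# Real HHT applies this per intrinsic mode function after EMD; here we use a
# single pure tone so the expected envelope is known.
import numpy as np

def analytic_signal(x):
    """Return the analytic signal x + j*H(x) via the FFT method."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

fs = 8000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)   # a pure 440 Hz tone, amplitude 0.5
env = np.abs(analytic_signal(x))        # instantaneous amplitude (envelope)
# For a pure tone over whole periods, the envelope is the constant amplitude.
print(round(float(env[fs // 2]), 3))    # → 0.5
```

The same routine is available as `scipy.signal.hilbert`; for emotional speech, the envelope and instantaneous frequency vary over time, and plotting them per mode yields the Hilbert spectrum image fed to the CNN.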