Articles

Improving multi-class EEG-motor imagery classification using two-stage detection on one-versus-one approach Adi Wijaya; Teguh Bharata Adji; Noor Akhmad Setiawan
Communications in Science and Technology Vol 5 No 2 (2020)
Publisher : Komunitas Ilmuwan dan Profesional Muslim Indonesia

DOI: 10.21924/cst.5.2.2020.216

Abstract

Multi-class motor imagery classification based on electroencephalogram (EEG) signals in Brain-Computer Interface (BCI) systems still faces challenges, such as inconsistent accuracy and low classification performance caused by inter-subject dependency. Therefore, this study aims to improve multi-class EEG motor imagery classification using two-stage detection and a voting scheme on a one-versus-one approach. The EEG signals used in this research were represented by statistical measures extracted from narrow sliding windows. Inter- and cross-subject schemes were then investigated on BCI Competition IV Dataset 2a to evaluate the effectiveness of the proposed method. The experimental results showed that the proposed method improved the inter- and cross-subject kappa coefficients to 0.78 and 0.68, respectively, with a low standard deviation of 0.1 for both schemes. These results further indicate that the proposed method is able to address inter-subject dependency toward promising and reliable BCI systems.
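
As a rough illustration of the one-versus-one classification and kappa evaluation described above, the sketch below builds simple sliding-window statistics and an OvO SVM with scikit-learn; the window parameters, the synthetic trials, and the SVM base classifier are assumptions for illustration, not the authors' exact two-stage detection pipeline.

```python
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def window_statistics(trials, win=125, step=62):
    """Extract simple statistics (mean, std) from sliding windows of each channel."""
    feats = []
    for trial in trials:                          # trial: (channels, samples)
        f = []
        for start in range(0, trial.shape[1] - win + 1, step):
            seg = trial[:, start:start + win]
            f.extend(seg.mean(axis=1))
            f.extend(seg.std(axis=1))
        feats.append(f)
    return np.asarray(feats)

# Synthetic stand-in for BCI Competition IV Dataset 2a trials (4 MI classes).
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 22, 750))           # 200 trials, 22 channels, 3 s at 250 Hz
y = rng.integers(0, 4, size=200)

X = window_statistics(X_raw)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = OneVsOneClassifier(SVC(kernel="rbf", C=1.0))  # one binary SVM per class pair
clf.fit(X_tr, y_tr)
print("kappa:", cohen_kappa_score(y_te, clf.predict(X_te)))
```
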
The Effect of Load Balancing on Parallel Processing for Video Compression Sudaryanto; Teguh Bharata Adji; Hanung Adi Nugroho
SENATIK STT Adisutjipto Vol 2 (2016): Peran Teknologi dan Kedirgantaraan Untuk Meningkatkan Daya Saing Bangsa
Publisher : Institut Teknologi Dirgantara Adisutjipto

DOI: 10.28989/senatik.v2i0.85

Abstract

Communication and multimedia applications, especially video data processing, require very high resources, both in computing and in communication traffic. This usually demands high-end machines such as high-specification servers, which are of course very expensive. This work therefore builds a web-based application that implements the concept of parallel processing with a load-balancing process based on CPU usage to compress video files with the FFmpeg software. The compression is configured so that the output has half the resolution of the original video. The test results indicate that, with the load-balanced parallel concept, the compression process shows an average speed-up of 8.07% over the parallel non-load-balancing process with 2 compressors, 37.57% with 3 compressors, and 41.24% with 4 compressors. Processor efficiency is 28.76% higher than the parallel non-load-balancing process with 2 compressors, 37.57% with 3 compressors, and 41.24% with 4 compressors. Keywords: parallel processing, video compression, load balancing, CPU usage
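
For the scheduling idea, a minimal sketch is given below, assuming a list of compressor nodes that report their CPU usage and the standard FFmpeg scale filter for halving resolution; the node list and its field names are hypothetical, and the paper's actual system is a web-based application rather than a local script.

```python
import subprocess

def pick_least_loaded(nodes):
    """Return the compressor node reporting the lowest CPU usage."""
    return min(nodes, key=lambda n: n["cpu_usage"])

def compress_half_resolution(src, dst):
    """Compress a video to half its original resolution with FFmpeg."""
    cmd = ["ffmpeg", "-y", "-i", src,
           "-vf", "scale=iw/2:ih/2",               # halve width and height
           dst]
    subprocess.run(cmd, check=True)

# Hypothetical CPU usage values, normally reported by each compressor node.
nodes = [{"host": "compressor-1", "cpu_usage": 72.0},
         {"host": "compressor-2", "cpu_usage": 35.5},
         {"host": "compressor-3", "cpu_usage": 58.2}]
target = pick_least_loaded(nodes)
print("dispatching job to", target["host"])
# compress_half_resolution("input.mp4", "output_half.mp4")  # run on the chosen node
```
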
Data Benchmarking on Google BigQuery and Elasticsearch Nisrina Akbar Rizky Putri; Widyawan; Teguh Bharata Adji
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 10 No 3: Agustus 2021
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada

DOI: 10.22146/jnteti.v10i3.1745

Abstract

Nowadays, the cloud is not only a data storage medium but can also be used as a medium for managing and analyzing data. Google offers Google BigQuery as a platform capable of managing and analyzing data, while Elasticsearch is a search and analytics engine whose data can be analyzed using Kibana. Using a dataset of tweets crawled through http://netlytic.org/ containing the hashtags #COVID19 and #coronavirus, the two platforms are analyzed and their performance compared through benchmarks. A benchmark is a process used to measure and compare the performance of an activity so that the desired level of performance is achieved. Data benchmarking is performed on both platforms to determine their workload behavior. The result obtained in this study is that Google BigQuery is superior, both in uploading larger datasets than Elasticsearch and under the two query testing models. Query processing time on Google BigQuery is also shorter than on Elasticsearch. Meanwhile, the visualization results from the two platforms show the same percentages.
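
A minimal sketch of the query-time comparison is shown below, assuming recent versions of the official google-cloud-bigquery and elasticsearch Python clients; the project, dataset, index, and query contents are placeholders rather than the study's actual workload.

```python
import time
from google.cloud import bigquery
from elasticsearch import Elasticsearch

def time_bigquery(sql):
    """Run a query on BigQuery and return the elapsed wall-clock time."""
    client = bigquery.Client()                     # uses default GCP credentials
    start = time.perf_counter()
    list(client.query(sql).result())               # wait for all result rows
    return time.perf_counter() - start

def time_elasticsearch(index, query):
    """Run a search on a local Elasticsearch node and return the elapsed time."""
    es = Elasticsearch("http://localhost:9200")
    start = time.perf_counter()
    es.search(index=index, query=query, size=0)
    return time.perf_counter() - start

# Placeholder workload: count tweets mentioning #COVID19 on each platform.
sql = "SELECT COUNT(*) FROM `my_project.tweets.covid19` WHERE text LIKE '%#COVID19%'"
query = {"match": {"text": "#COVID19"}}
print("BigQuery:", time_bigquery(sql), "s")
print("Elasticsearch:", time_elasticsearch("tweets", query), "s")
```
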
Aspect Category Classification with a Machine Learning Approach Using an Indonesian-Language Dataset Syaifulloh Amien Pandega Perdana; Teguh Bharata Adji; Ridi Ferdiana
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 10 No 3: Agustus 2021
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada

DOI: 10.22146/jnteti.v10i3.1819

Abstract

Customer reviews are opinions on the quality of goods or services as perceived by consumers. They contain useful information for both consumers and providers of goods or services. The availability of a large number of customer reviews on websites requires a framework for extracting sentiment automatically. A customer review often covers many aspects, so Aspect-Based Sentiment Analysis (ABSA) should be used to determine the polarity of each aspect. One of the important tasks in ABSA is Aspect Category Detection. Machine learning methods for Aspect Category Detection have mostly been applied in the English-language domain, while studies in the Indonesian-language domain are still few. This study compares the performance of three machine learning algorithms, namely Naïve Bayes (NB), Support Vector Machine (SVM), and Random Forest (RF), on Indonesian-language customer reviews using Term Frequency-Inverse Document Frequency (TF-IDF) as term weighting. The results show that RF performs best compared to NB and SVM in three different domains, namely restaurants, hotels, and e-commerce, with f1-scores of 84.3%, 85.7%, and 89.3%, respectively.
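
As an illustration of this experimental setup, the sketch below compares NB, SVM, and RF on TF-IDF features with scikit-learn; the toy Indonesian snippets and aspect labels are invented stand-ins for the restaurant, hotel, and e-commerce datasets used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy Indonesian reviews with hypothetical aspect category labels.
reviews = ["makanannya enak dan murah", "rasa masakannya biasa saja",
           "harga produknya terjangkau", "harganya terlalu mahal",
           "pelayanannya ramah dan cepat", "pelayanan kasir sangat lambat"]
aspects = ["food", "food", "price", "price", "service", "service"]

for name, clf in [("NB", MultinomialNB()),
                  ("SVM", LinearSVC()),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF weighting + classifier
    scores = cross_val_score(pipe, reviews, aspects, cv=2, scoring="f1_macro")
    print(name, round(scores.mean(), 3))
```
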
Point of Interest (POI) Recommendation System using Implicit Feedback Based on K-Means+ Clustering and User-Based Collaborative Filtering Sulis Setiowati; Teguh Bharata Adji; Igi Ardiyanto
Computer Engineering and Applications Journal Vol 11 No 2 (2022)
Publisher : Universitas Sriwijaya

DOI: 10.18495/comengapp.v11i2.399

Abstract

A recommendation system always involves huge volumes of data; it therefore suffers from scalability issues that not only increase processing time but also reduce accuracy. In addition, the type of data used greatly affects the quality of the recommendations. In recommendation systems, there are two common types of rating data, namely implicit (binary) ratings and explicit (scalar) ratings. Binary ratings produce lower accuracy when they are not handled properly. Thus, an optimized K-Means+ clustering combined with user-based collaborative filtering is proposed in this research. The K-Means clustering is optimized by selecting the K value using the Davies-Bouldin Index (DBI) method. The experimental results show that this K-value optimization produces better clustering than the Elbow Method. The combination of K-Means+ and User-Based Collaborative Filtering (UBCF) produces a precision of 8.6% and an f-measure of 7.2%. The proposed method was also compared to the DBSCAN algorithm with UBCF and achieved better accuracy, with a 1% increase in precision. This result shows that K-Means+ with UBCF can handle implicit feedback datasets and improve precision.
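
The K-selection step can be sketched as below, assuming scikit-learn's K-Means and Davies-Bouldin score on a random binary user-item matrix standing in for the real POI check-in data; the subsequent user-based collaborative filtering step is only indicated in a comment.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(42)
user_item = (rng.random((100, 50)) < 0.1).astype(float)   # implicit (binary) feedback

# Pick the number of clusters with the lowest Davies-Bouldin Index.
best_k, best_dbi = None, np.inf
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(user_item)
    dbi = davies_bouldin_score(user_item, labels)
    if dbi < best_dbi:
        best_k, best_dbi = k, dbi

print("selected k:", best_k, "DBI:", round(best_dbi, 3))
# User-based collaborative filtering would then be run inside each cluster.
```
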
The Role of Contour and Slope in Signature Authenticity Recognition Using Dynamic Time Warping and Polar Fourier Transform Ignatia Dhian Estu Karisma Ratri; Hanung Adi Nugroho; Teguh Bharata Adji
Jurnal Informatika Vol 12, No 2 (2016): Jurnal Teknologi Komputer dan Informatika
Publisher : Universitas Kristen Duta Wacana

DOI: 10.21460/inf.2016.122.495

Abstract

The writers have observed that, so far, signatures are only validated manually, so there is an opportunity to create a system for handwritten signature recognition. The objective of this research is to improve handwritten signature recognition by combining methods with different characteristics. Contour and slope are used as the distinctive features in this research; the contour and slope extracted from the image are matched using Dynamic Time Warping (DTW). Another feature extraction method used is the Polar Fourier Transform (PFT). The method employed for classification is the Support Vector Machine (SVM). From the research results, the writers find that the combination of DTW and PFT with SVM classification provides better results in verifying an authentic handwritten signature, with an accuracy of 93.23%. It is expected that these results can be utilized in the verification of authentic handwritten signatures in daily life in the near future.
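
A textbook version of the DTW matching step is sketched below; the contour/slope sequences here are synthetic, and this is not necessarily the authors' exact implementation, which additionally combines PFT features and an SVM classifier.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: compare two signature contour profiles of different lengths.
reference = np.sin(np.linspace(0, 3 * np.pi, 80))
candidate = np.sin(np.linspace(0, 3 * np.pi, 95)) + 0.05
print("DTW distance:", round(dtw_distance(reference, candidate), 3))
```
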
Performance Improvement Using CNN for Sentiment Analysis Moch. Ari Nasichuddin; Teguh Bharata Adji; Widyawan Widyawan
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 2, No 1 (2018): March 2018
Publisher : Department of Electrical Engineering and Information Technology, Faculty of Engineering UGM

DOI: 10.22146/ijitee.36642

Abstract

Deep Learning approaches provide great results in various fields, especially in Sentiment Analysis. One of the Deep Learning methods is the CNN, which has achieved high accuracy in several previous studies. However, some parts of the training process can still be improved to increase accuracy and reduce training time. In this paper, we try to improve the accuracy and processing time of sentiment analysis using a CNN model. By tuning the filter size, frameworks, and pre-training, the results show that using a smaller filter size and word2vec pre-training provides higher accuracy than some previous studies.
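
A minimal Keras sketch of such a 1-D CNN is given below to illustrate the filter-size parameter being tuned; the vocabulary size, sequence length, and embedding dimension are placeholder values, and pre-trained word2vec vectors would be loaded into the Embedding layer in the actual setup.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMB_DIM = 10000, 100, 300
FILTER_SIZE = 3                                 # the "smaller filter size" being tuned

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMB_DIM),      # word2vec weights would be loaded here
    layers.Conv1D(128, FILTER_SIZE, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),      # binary sentiment output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
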
Study of Undersampling Method: Instance Hardness Threshold with Various Estimators for Hate Speech Classification Naufal Azmi Verdikha; Teguh Bharata Adji; Adhistya Erna Permanasari
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 2, No 2 (2018): June 2018
Publisher : Department of Electrical Engineering and Information Technology, Faculty of Engineering UGM

DOI: 10.22146/ijitee.42152

Abstract

A text classification system is needed to address the problem of hate speech on social media. However, hate speech texts are very hard to find on social media, which makes the distribution of training data unbalanced (imbalanced data). Classification with imbalanced data leads to poor performance. There are several methods to solve the problem of classification with imbalanced data; one of them is undersampling with the Instance Hardness Threshold (IHT) method. The IHT method balances the dataset by eliminating data that are frequently misclassified. To find such data, IHT requires an estimator, which is a classifier. This research aims to compare estimators for the IHT method in solving the imbalanced data problem in hate speech classification using TF-IDF weighting. This research uses the class ratio of the dataset after undersampling, the duration of the undersampling process, and the Index of Balanced Accuracy (IBA) evaluation to determine the best IHT configuration. The results show that IHT using Logistic Regression (IHT(LR)) has the fastest undersampling process (1.91 s), perfectly balances the dataset with a 1:1 class ratio, and has the best IBA evaluation across all estimation processes. This makes IHT(LR) the best method to solve the imbalanced data problem in hate speech classification.
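
The IHT(LR) setup can be sketched with imbalanced-learn as below; the synthetic imbalanced data stands in for the TF-IDF-weighted hate speech corpus, and the 90/10 class split is an assumption for illustration.

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from imblearn.under_sampling import InstanceHardnessThreshold

# Synthetic imbalanced data (about 90% vs 10%) standing in for TF-IDF vectors.
X, y = make_classification(n_samples=1000, n_features=50, weights=[0.9, 0.1],
                           random_state=0)
print("class counts before:", Counter(y))

# IHT with a Logistic Regression estimator, i.e. the IHT(LR) configuration above.
iht = InstanceHardnessThreshold(estimator=LogisticRegression(max_iter=1000),
                                random_state=0)
X_res, y_res = iht.fit_resample(X, y)
print("class counts after:", Counter(y_res))
```
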
Relational into Non-Relational Database Migration with Multiple-Nested Schema Methods on Academic Data Teguh Bharata Adji; Dwi Retno Puspita Sari; Noor Akhmad Setiawan
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 3, No 1 (2019): March 2019
Publisher : Department of Electrical Engineering and Information Technology, Faculty of Engineering UGM

DOI: 10.22146/ijitee.46503

Abstract

The rapid development of internet technology has increased the need for data storage and processing technology. One application is managing academic data records at educational institutions. Along with the massive growth of information, a decrease in traditional database performance is inevitable. Hence, many companies choose to migrate to NoSQL, a technology that is able to overcome the shortcomings of traditional databases. However, existing SQL-to-NoSQL migration tools have not been able to represent SQL data relations in NoSQL without limiting query performance. In this paper, a transformation system migrating a relational MySQL database into the non-relational database MongoDB was developed using the Multiple Nested Schema method for academic databases. The development began with a transformation scheme design, which was then implemented in the migration process using PDI/Kettle. Testing was carried out on three aspects, namely query response time, data integrity, and storage requirements. The test results showed that the developed system successfully represents the relationships of SQL data in NoSQL: complex queries performed 13.32 times faster on the migrated database, basic queries involving SQL transaction tables performed 28.6 times faster on the migration result, while basic queries not involving SQL transaction tables were 3.91 times faster on the migration source. This confirms the premise of the Multiple Nested Schema method, which aims to overcome the poor performance of queries involving many JOIN operations. In addition, the system is also proven to maintain data integrity in all tested queries. The storage test results indicated that the database transformed using the Multiple Nested Schema method requires 10.53 times more storage than the migration source database, due to the large amount of data redundancy produced by the transformation process. However, storage performance is currently not a top priority in data processing technology, so the large storage requirement is an acceptable trade-off for efficient query performance, which is still considered the first priority.
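
The embedding idea behind the Multiple Nested Schema transformation can be sketched as below, where rows of a child table are nested inside their parent document before insertion into MongoDB; the table, collection, and column names are hypothetical, not the academic schema or the PDI/Kettle pipeline used in the paper.

```python
import mysql.connector
from pymongo import MongoClient

# Source relational database and target document database (placeholder credentials).
sql = mysql.connector.connect(host="localhost", user="root",
                              password="secret", database="academic")
mongo = MongoClient("mongodb://localhost:27017")["academic_nosql"]

cur = sql.cursor(dictionary=True)
cur.execute("SELECT * FROM students")
for student in cur.fetchall():
    child = sql.cursor(dictionary=True)
    child.execute("SELECT course_id, grade FROM enrollments WHERE student_id = %s",
                  (student["id"],))
    student["enrollments"] = child.fetchall()    # nest child rows inside the parent
    mongo["students"].insert_one(student)        # one JOIN-free document per student
```
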
Design of Web-Based Cashier and Spare Part Warehouse Application Display (Case Study at Surya Motor Shop) Muhammad Esa Permana Putra; Teguh Bharata Adji; Adhistya Erna Permanasari
IJITEE (International Journal of Information Technology and Electrical Engineering) Vol 4, No 2 (2020): June 2020
Publisher : Department of Electrical Engineering and Information Technology, Faculty of Engineering UGM

DOI: 10.22146/ijitee.53512

Abstract

A cashier and spare parts warehouse application is an information system facilitating financial reporting and item inventory systems. Such systems have become a necessity in almost all large- and small-scale businesses in every country. The existing information system at the Surya Motor Shop does not have a display that helps users operate the company's financial and transaction systems in accordance with company needs. The information system uses Bootstrap with HTML, CSS, and JavaScript. In this paper, an interactive display able to accommodate web users' responses was developed as a prototype using Bootstrap at the Surya Motor Shop. This was carried out to digitize the transaction system, making it easier to report the company's item inventory and finances. The prototype was developed using The Elements of User Experience method, a user-centered design process. After the prototype was developed, a test was carried out to determine the quality of the user experience using the User Experience Questionnaire (UEQ) method. The UEQ testing showed that the developed prototype interface had a positive level of user experience. Compared with the benchmarks set by UEQ, the test results were above the benchmark mean, except for the pull factor, which was still below the benchmark average.
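
For the UEQ evaluation step, a minimal scoring sketch is given below, assuming items are already grouped by scale; the responses, the scale names shown, and the benchmark means are placeholder numbers for illustration, not the paper's data or the official UEQ benchmark values.

```python
import numpy as np

def scale_mean(raw_answers):
    """Map raw 1..7 UEQ answers to the -3..+3 range and average them."""
    return float(np.mean(np.asarray(raw_answers) - 4))

# Placeholder answers already grouped by UEQ scale.
responses = {
    "Attractiveness": [6, 6, 5, 6, 7, 5],
    "Perspicuity":    [6, 5, 6, 6],
    "Novelty":        [4, 4, 3, 4],
}
# Placeholder benchmark means to compare against.
benchmark_mean = {"Attractiveness": 1.5, "Perspicuity": 1.4, "Novelty": 0.8}

for scale, answers in responses.items():
    m = scale_mean(answers)
    verdict = "above" if m > benchmark_mean[scale] else "below"
    print(f"{scale}: {m:+.2f} ({verdict} benchmark {benchmark_mean[scale]:+.2f})")
```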