Contact Name
Yuliah Qotimah
Contact Email
yuliah@lppm.itb.ac.id
Phone
+622286010080
Journal Mail Official
jictra@lppm.itb.ac.id
Editorial Address
LPPM ITB, Center for Research and Community Services (CRCS) Building, 6th Floor, Jl. Ganesha No. 10, Bandung 40132, Indonesia. Tel: +62-22-86010080, Fax: +62-22-86010051
Location
Kota Bandung,
Jawa Barat,
Indonesia
Journal of ICT Research and Applications
ISSN: 2337-5787     EISSN: 2338-5499     DOI: https://doi.org/10.5614/itbj.ict.res.appl.
Core Subject: Science
Journal of ICT Research and Applications welcomes full research articles in the area of Information and Communication Technology from the following subject areas: Information Theory, Signal Processing, Electronics, Computer Network, Telecommunication, Wireless & Mobile Computing, Internet Technology, Multimedia, Software Engineering, Computer Science, Information System and Knowledge Management.
Articles in issue Vol. 10 No. 2 (2016): 7 Documents
Tweet-based Target Market Classification Using Ensemble Method
Muhammad Adi Khairul Anshary; Bambang Riyanto Trilaksono
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB

DOI: 10.5614/itbj.ict.res.appl.2016.10.2.3

Abstract

Target market classification aims to focus marketing activities on the right targets. Target markets can be classified through data mining, utilizing data from social media such as Twitter. The end result of data mining is a set of learning models that can classify new data. Ensemble methods can improve the accuracy of these models and therefore provide better results. In this study, target market classification was conducted on a dataset of 3000 tweets, from which features were extracted. Classification models were constructed by manipulating the training data using two ensemble methods, bagging and boosting. To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiment (positive, negative and neutral) were classified into three target-market categories. Machine learning was performed using Weka 3.6.9. On the test data, the bagging method improved the accuracy of CART by 1.9% (to 85.20%). For sentiment classification, on the other hand, the ensemble methods did not increase the accuracy of CART. The results of this study may be of interest to companies that approach their customers through social media, especially Twitter.
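The bagging idea the abstract relies on can be sketched in a few lines: train several base classifiers on bootstrap resamples of the training data and combine them by majority vote. The paper used Weka's CART on tweet features; here a trivial one-feature threshold "stump" on toy data stands in for CART, purely for illustration.

```python
import random
from collections import Counter

def majority(labels, default="neg"):
    counts = Counter(labels)
    return counts.most_common(1)[0][0] if counts else default

def train_stump(sample):
    # sample: list of (x, label); split at the mean of x and predict the
    # majority label on each side (a one-feature stand-in for CART).
    t = sum(x for x, _ in sample) / len(sample)
    left = majority([lab for x, lab in sample if x <= t])
    right = majority([lab for x, lab in sample if x > t])
    return lambda x: left if x <= t else right

def bagging_fit(data, n_estimators=25, seed=0):
    # Bagging: each base model sees a bootstrap resample of the data.
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_estimators)]

def bagging_predict(models, x):
    # Combine the base models by majority vote.
    return majority([m(x) for m in models])

# Toy training data: small x -> "neg", large x -> "pos".
data = [(i / 10, "neg") for i in range(10)] + [(1 + i / 10, "pos") for i in range(10)]
models = bagging_fit(data)
print(bagging_predict(models, 0.2), bagging_predict(models, 1.8))  # neg pos
```

Averaging many high-variance learners trained on resampled data is what lets bagging lift the accuracy of a single tree, as reported for the product-category task.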
Efficient CFO Compensation Method in Uplink OFDMA for Mobile WiMax
Lakshmanan Muthukaruppan; Parthasharathi Mallick; Nithyanandan Lakshmanan; Sai Krishna Marepalli
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB

DOI: 10.5614/itbj.ict.res.appl.2016.10.2.4

Abstract

Mobile WiMax uses Orthogonal Frequency Division Multiple Access (OFDMA) in the uplink, where synchronization is a complex task because each user presents a different carrier frequency offset (CFO). In the Data-Aided Phase Incremental Technique (DA-PIT), estimation is performed after the FFT operation in order to use the received frequency-domain pilot-subcarrier information. Since estimation is done in the presence of noise, some offset error remains, called the residual frequency offset (RFO). The Simple Time-Domain Multi-User Interference Cancellation scheme (SI-MUIC) is a time-domain approach that incurs a long delay in compensating the CFO effect for the last user. Decorrelation-Successive Interference Cancellation (DC-SC) and Integrated Estimation and Compensation (IEC) are frequency-domain approaches that compensate the CFO effect with a more complex method of ICI cancellation. A Modified Integrated Estimation and Compensation technique (Modified IEC) is proposed for better residual CFO compensation. The proposed technique performs better because it efficiently suppresses ICI and MUI. The difference between the CFOs of two OFDMA symbols lies within the range of the RFO, which is not considered in conventional compensation techniques such as SI-MUIC, DC-SC and IEC.
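The core CFO effect that all of these schemes fight can be shown in a few lines: a frequency offset rotates time-domain sample n by exp(j2πεn/N), and multiplying by the conjugate phase ramp undoes it. This is only the single-user, known-offset textbook case, not the paper's Modified IEC multi-user scheme; N and ε below are illustrative values.

```python
import cmath
import math

N = 64      # OFDMA symbol length (FFT size), illustrative
eps = 0.3   # normalized CFO, as a fraction of the subcarrier spacing

# Transmit one subcarrier (index 5); the CFO rotates each received sample.
tx = [cmath.exp(2j * math.pi * 5 * n / N) for n in range(N)]
rx = [s * cmath.exp(2j * math.pi * eps * n / N) for n, s in enumerate(tx)]

# Time-domain compensation: multiply by the conjugate phase ramp.
comp = [r * cmath.exp(-2j * math.pi * eps * n / N) for n, r in enumerate(rx)]

err = max(abs(c - s) for c, s in zip(comp, tx))
print(f"max residual error after compensation: {err:.2e}")
```

In the uplink the difficulty is that each user's ε differs and is only estimated, which is why residual offsets (RFO) and multi-user interference remain after this basic correction.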
High Performance CDR Processing with MapReduce
Mulya Agung; Achmad Imam Kistijantoro
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB

DOI: 10.5614/itbj.ict.res.appl.2016.10.2.1

Abstract

A call detail record (CDR) is a data record produced by telecommunication equipment, consisting of call detail transaction logs. It contains valuable information for many purposes in several domains, such as billing, fraud detection and analytics. In the real world, however, these needs face a big-data challenge: billions of CDRs are generated every day, and the processing systems are expected to deliver results in a timely manner. The capacity of our current production system is not enough to meet these needs. Therefore, a better-performing system based on MapReduce, running on a Hadoop cluster, was designed and implemented. This paper presents an analysis of the previous system as well as the design and implementation of the new system, called MS2. Empirical evidence is also provided to demonstrate the efficiency and linearity of MS2. Tests have shown that MS2 reduces overhead by 44% and nearly doubles performance compared to the previous system. In benchmarks against several related large-scale data processing technologies, MS2 was also shown to perform better for CDR batch processing. Running on a cluster consisting of eight CPU cores and two conventional disks, MS2 is able to process 67,000 CDRs per second.
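The MapReduce pattern underlying such a system can be sketched in-process: a map phase emits (key, value) pairs from each CDR, a shuffle groups them by key, and a reduce phase aggregates each group. The record fields below (caller, callee, duration) are illustrative, not the paper's schema; MS2 runs this pattern distributed on a Hadoop cluster.

```python
from collections import defaultdict

def map_phase(cdr):
    # Emit (key, value) pairs: total call duration keyed by caller.
    caller, callee, duration = cdr
    yield caller, duration

def shuffle(pairs):
    # Group values by key, as the MapReduce framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)

cdrs = [("alice", "bob", 120), ("bob", "carol", 30), ("alice", "dave", 45)]
pairs = (p for cdr in cdrs for p in map_phase(cdr))
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(result)  # {'alice': 165, 'bob': 30}
```

Because each map and reduce task is independent, the same code scales across cluster nodes, which is the source of the near-linear throughput the paper measures.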
Social Media Text Classification by Enhancing Well-Formed Text Trained Model
Phat Jotikabukkana; Virach Sornlertlamvanich; Okumura Manabu; Choochart Haruechaiyasak
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB

DOI: 10.5614/itbj.ict.res.appl.2016.10.2.6

Abstract

Social media are a powerful communication tool in our era of digital information. The large amount of user-generated data is a useful novel source of data, even though it is not easy to extract the treasures from this vast and noisy trove. Since classification is an important part of text mining, many techniques have been proposed to classify this kind of information. We developed an effective technique for social media text classification by semi-supervised learning, utilizing an online news source consisting of well-formed text. The computer first automatically extracts the news categories, well-categorized by the publishers, as classes for topic classification. A bag of words taken from the news articles provides the initial keywords related to each category in the form of word vectors. The principal task is to retrieve a set of new productive keywords. Term Frequency-Inverse Document Frequency (TF-IDF) weighting and the Word Article Matrix (WAM) are used as the main methods. The WAM is recomputed in a modified form until it becomes the most effective model for social media text classification. The key success factor was enhancing the model with effective keywords from social media. A promising result of 99.50% accuracy was achieved, with more than 98.5% precision, recall, and F-measure after updating the model three times.
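The TF-IDF weighting step this approach builds on can be sketched as follows: each category's articles yield a weighted keyword vector, and a new text is assigned to the category whose vector it is most similar to (cosine similarity here). The iterative WAM update with social media keywords, which is the paper's contribution, is omitted; the documents and categories below are toy examples.

```python
import math
from collections import Counter

docs = {
    "sport": ["goal match team win", "team plays match today"],
    "tech":  ["new phone camera chip", "chip powers phone launch"],
}

def tfidf_vectors(corpus):
    # Weight each word by term frequency times inverse document frequency.
    n_docs = sum(len(texts) for texts in corpus.values())
    df = Counter(w for texts in corpus.values()
                   for text in texts for w in set(text.split()))
    vecs = {}
    for cat, texts in corpus.items():
        tf = Counter(w for text in texts for w in text.split())
        vecs[cat] = {w: tf[w] * math.log(n_docs / df[w]) for w in tf}
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, vecs):
    query = dict(Counter(text.split()))
    return max(vecs, key=lambda cat: cosine(query, vecs[cat]))

vecs = tfidf_vectors(docs)
print(classify("phone with new camera", vecs))  # tech
```

The IDF factor suppresses words shared by all categories, which is why the initial news-trained keyword vectors discriminate between topics at all before any social media enhancement.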
Mining High Utility Itemsets with Regular Occurrence
Komate Amphawan; Philippe Lenca; Anuchit Jitpattanakul; Athasit Surarerks
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB

DOI: 10.5614/itbj.ict.res.appl.2016.10.2.5

Abstract

High utility itemset mining (HUIM) plays an important role in the data mining community and in a wide range of applications. In retail business, for example, it is used to find sets of sold products that yield high profit, low cost, etc. These itemsets can help improve marketing strategies, promotions and advertisements. However, since HUIM considers only the utility values of items/itemsets, it may not be sufficient for observing the product-buying behavior of customers, such as information related to "regular purchases of sets of products having a high profit margin". To address this issue, the occurrence behavior of itemsets (in terms of regularity) was investigated simultaneously with their utility values. The problem of mining high utility itemsets with regular occurrence (MHUIR), i.e. finding sets of co-occurring items with high utility values and regular occurrence in a database, was then considered. An efficient single-pass algorithm, called MHUIRA, was introduced. A new modified utility-list structure, called NUL, was designed to efficiently maintain utility values and occurrence information and to speed up the computation of itemset utilities. Experimental studies on real and synthetic datasets and complexity analyses show the efficiency of MHUIRA combined with NUL, in terms of time and space usage, for mining interesting itemsets based on regularity and utility constraints.
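The problem being solved can be made concrete with a brute-force version on a toy database: an itemset qualifies if its total utility meets a minimum threshold and the gap between consecutive transactions containing it (including the gaps to the database borders) never exceeds a regularity threshold. This enumeration is exponential; the paper's single-pass MHUIRA with the NUL structure exists precisely to avoid it.

```python
from itertools import combinations

# Each transaction maps item -> utility (e.g. quantity * unit profit).
db = [
    {"a": 5, "b": 2},            # tid 0
    {"a": 4, "c": 3},            # tid 1
    {"a": 6, "b": 1, "c": 2},    # tid 2
    {"b": 3, "c": 4},            # tid 3
]

def mine(db, min_util, max_reg):
    items = sorted({i for t in db for i in t})
    result = {}
    for r in range(1, len(items) + 1):
        for iset in combinations(items, r):
            tids = [tid for tid, t in enumerate(db)
                    if all(i in t for i in iset)]
            if not tids:
                continue
            util = sum(db[tid][i] for tid in tids for i in iset)
            # Regularity: largest gap between consecutive occurrences,
            # counting the gaps to the start and end of the database.
            gaps = ([tids[0] + 1]
                    + [b - a for a, b in zip(tids, tids[1:])]
                    + [len(db) - tids[-1]])
            if util >= min_util and max(gaps) <= max_reg:
                result[iset] = util
    return result

print(mine(db, min_util=10, max_reg=2))
```

Note how {b, c} is rejected despite reaching the utility threshold: it first appears only at tid 2, so its occurrence is not regular enough, which is exactly the behavior plain HUIM cannot express.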
An Application of PSV-S in Fast Development of a Real-Time DSP System
Armein Z.R. Langi
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB

DOI: 10.5614/itbj.ict.res.appl.2016.10.2.2

Abstract

Virtual prototyping is natural in developing digital signal processing (DSP) systems using a product-service-value system (PSV-S) approach. Our DSP virtual prototyping approach consists of four development phases: (1) a generic DSP system, (2) a functional DSP system, (3) an architectural DSP system, and (4) a real-time DSP system. This results in a more comprehensive DSP system development process. This paper shows an example of prototyping a voice codec on a single-chip DSP processor.
Cover JICTRA Vol. 10 No. 2, 2016
Journal of ICT Research and Applications
Journal of ICT Research and Applications Vol. 10 No. 2 (2016)
Publisher: LPPM ITB