Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta,
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN: 2442-6571     EISSN: 2548-3161     DOI: 10.26555
Core Subject: Science
International Journal of Advances in Intelligent Informatics (IJAIN), p-ISSN: 2442-6571, e-ISSN: 2548-3161, is a peer-reviewed open-access journal published three times a year in English. It provides scientists and engineers throughout the world a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers, accepted papers are available online with free access, and there is no publication fee for authors.
Articles 10 Documents
Search results for issue "Vol 5, No 3 (2019): November 2019": 10 Documents
Improving stroke diagnosis accuracy using hyperparameter optimized deep learning Tessy Badriyah; Dimas Bagus Santoso; Iwan Syarif; Daisy Rahmania Syarif
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.427

Abstract

Stroke can cause death for anyone, including young people. One early stroke-detection technique is the Computerized Tomography (CT) scan. This research aimed to optimize deep-learning hyperparameters, using Random Search and Bayesian Optimization to determine the right hyperparameter values. The CT scan images were processed by scaling, grayscale conversion, smoothing, thresholding, and morphological operations; image features were then extracted with the Gray Level Co-occurrence Matrix (GLCM). Feature selection was performed to retain relevant features and reduce computing expense, while deep learning with the tuned hyperparameter settings was used for the classification process. The experimental results showed that Random Search achieved the best accuracy, while Bayesian Optimization excelled in optimization time.
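As a rough illustration of the tuning loop this abstract describes, a random search over a hyperparameter grid can be sketched as follows; the search space and scoring function are hypothetical placeholders, not the authors' actual configuration:

```python
import random

# Hypothetical hyperparameter space; the paper's actual space is not given.
SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units": [32, 64, 128],
    "dropout": [0.0, 0.2, 0.5],
}

def evaluate(config):
    # Placeholder for training a network on GLCM features and returning
    # validation accuracy; here a toy deterministic score for illustration.
    return config["hidden_units"] / 128 - config["dropout"] * 0.1

def random_search(n_trials=20, seed=0):
    """Sample configurations at random and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(choices) for name, choices in SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Bayesian Optimization would replace the uniform sampling with a surrogate model that proposes promising configurations, trading extra per-step cost for fewer trials.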
Implementation of Hyyrö’s bit-vector algorithm using Advanced Vector Extensions 2 Kyle Matthew Chan Chua; Janz Aeinstein Fauni Villamayor; Lorenzo Campos Bautista; Roger Luis Uy
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.362

Abstract

The Advanced Vector Extensions 2 (AVX2) instruction set was introduced with Intel’s Haswell microarchitecture, which features improved processing power, wider vector registers, and a richer instruction set. This study presents an implementation of Hyyrö’s bit-vector algorithm for pairwise Deoxyribonucleic Acid (DNA) sequence alignment that takes advantage of the Single-Instruction-Multiple-Data (SIMD) capabilities of AVX2 on modern processors. It investigated the effects of the query and reference sequence lengths on I/O load time, computation time, and memory consumption. The results reveal that the implementation achieved an I/O load time of Θ(n), a computation time of Θ(n·⌈m/64⌉), and memory consumption of Θ(n). The computation time exceeded the expected Θ(n) due to instructional and architectural limitations. Nonetheless, it was on par with other experiments in terms of computation-time complexity and memory consumption.
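For reference, the word-level bit-vector recurrence underlying this family of algorithms (Myers' formulation with Hyyrö's refinements) can be sketched in portable scalar Python; this is a single 64-bit-word sketch without the AVX2 vectorization the paper implements:

```python
def bitvec_edit_distance(pattern, text):
    """Bit-parallel Levenshtein distance for patterns up to one 64-bit word."""
    m = len(pattern)
    if m == 0:
        return len(text)
    assert m <= 64, "single-word sketch"
    # Preprocess: per-symbol match masks (bit i set iff pattern[i] == symbol).
    peq = {}
    for i, ch in enumerate(pattern):
        peq[ch] = peq.get(ch, 0) | (1 << i)
    mask = (1 << m) - 1          # simulate an m-bit machine word
    msb = 1 << (m - 1)
    pv, mv, score = mask, 0, m   # positive/negative vertical delta vectors
    for ch in text:
        eq = peq.get(ch, 0)
        d0 = ((((eq & pv) + pv) & mask) ^ pv) | eq | mv   # diagonal-zero vector
        hp = mv | (~(d0 | pv) & mask)                     # horizontal +1s
        hn = pv & d0                                      # horizontal -1s
        if hp & msb:
            score += 1
        if hn & msb:
            score -= 1
        hp = ((hp << 1) | 1) & mask
        hn = (hn << 1) & mask
        pv = hn | (~(d0 | hp) & mask)
        mv = hp & d0
    return score
```

The AVX2 version processes several such words per instruction; the recurrence itself is unchanged, which is why the computation time scales as Θ(n·⌈m/64⌉).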
Improving learning vector quantization using data reduction Pande Nyoman Ariyuda Semadi; Reza Pulungan
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.330

Abstract

Learning Vector Quantization (LVQ) is a supervised learning algorithm commonly used for statistical classification and pattern recognition. The competitive layer in LVQ studies the input vectors and classifies them into the correct classes. The amount of data involved in the learning process can be reduced with data reduction methods. In this paper, we propose a data reduction method based on the geometrical proximity of the data. The basic idea is to drop sets of data points that are highly similar and keep one representative for each set. With certain adjustments, the method decreases the amount of data involved in the learning process while still maintaining the existing accuracy. The amount of data involved in the learning process is reduced to 33.22% for the abalone dataset and 55.02% for the bank marketing dataset.
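The proximity-based reduction idea can be sketched as a greedy filter (a minimal interpretation assuming a Euclidean distance threshold; the paper's actual adjustments are not reproduced here):

```python
import math

def reduce_by_proximity(points, radius):
    """Keep one representative per group of mutually close points.

    A point is dropped if it lies within `radius` of an already-kept
    representative (greedy and order-dependent, for illustration only).
    """
    kept = []
    for p in points:
        if all(math.dist(p, q) > radius for q in kept):
            kept.append(p)
    return kept
```

The surviving representatives would then be fed to LVQ training in place of the full dataset.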
Optimized biometric system based iris-signature for human identification Muthana Hachim Hamd
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.407

Abstract

This research compared two iris-signature techniques, the Sequential Technique (ST) and the Standard Deviation Technique (SDT). Both techniques were measured with Backpropagation (BP), probabilistic, Radial Basis Function (RBF), and Euclidean distance (ED) classifiers. An iris-based biometric system was developed to identify 30 subjects from the CASIA-v1 dataset and 10 subjects from the Real-iris dataset. The proposed unimodal system uses Fourier descriptors to extract the iris features and represent them as an iris-signature graph. The 150-value input vector was optimized to include only high-frequency coefficients of the iris-signature, and the two optimization techniques were then applied and compared. The first technique (ST) sequentially selects new feature values of different lengths from the region of the graph with rapid frequency changes. The second technique (SDT) chooses the high-variance coefficients as the new feature vector, based on the standard-deviation formula. The results show that SDT achieved better recognition performance with the shortest vector lengths, while the probabilistic and BP classifiers had the best accuracy.
Anomaly detection on flight route using similarity and grouping approach based on automatic dependent surveillance-broadcast Mohammad Yazdi Pusadan; Joko Lianto Buliali; Raden Venantius Hari Ginardi
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.232

Abstract

Flight anomaly detection is used to determine abnormal states in flight-route data. This study focused on two groups: general flight habits (C1) and anomalies (C2). Groups C1 and C2 are obtained through a similarity test against references. The methods used are: 1) normalizing the training data, 2) forming the training segments, 3) calculating the log-likelihood values and determining the maximum (C1) and minimum (C2) log-likelihood values, 4) determining the percentage of data meeting criteria C1 and C2 by grouping with SVM, k-NN, and k-means, and 5) testing with the log-likelihood ratio. The per-segment results are: the log-likelihood value for C1 is -15.97 in latitude and -16.97 in longitude, while for C2 it ranges from -19.3 (maximum) to -20.3 (minimum) in latitude and from -21.2 (maximum) to -24.8 (minimum) in longitude. The largest percentage value in C1 is 96%, while the largest in C2 is 10%. Thus, the highest potential anomaly rate is 10% and the smallest is 3%. Performance was also tested with the F-measure to obtain accuracy and precision.
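As an illustration of the log-likelihood scoring this abstract relies on, a per-segment Gaussian log-likelihood can be sketched as follows (a hypothetical model choice; the paper's actual reference distributions are not specified here):

```python
import math

def gaussian_log_likelihood(xs, mu, sigma):
    """Sum of log N(x | mu, sigma^2) over a segment of observations."""
    const = -0.5 * math.log(2.0 * math.pi * sigma * sigma)
    return sum(const - (x - mu) ** 2 / (2.0 * sigma * sigma) for x in xs)
```

Segments whose score falls far below the reference maximum (toward the C2 minima quoted above) would be flagged as candidate anomalies.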
Internal and collective interpretation for improving human interpretability of multi-layered neural networks Ryotaro Kamimura
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.420

Abstract

The present paper proposes a new type of information-theoretic method to interpret the inference mechanism of neural networks. We interpret the internal inference mechanism by itself, without any external methods such as symbolic or fuzzy rules. In addition, we make the interpretation process as stable as possible: we interpret the inference mechanism while considering all internal representations created under different conditions and patterns. To make this internal interpretation possible, we compress multi-layered neural networks into the simplest ones, without hidden layers. The natural information loss in the compression process is then complemented by introducing a mutual-information augmentation component. The method was applied to two data sets, namely the glass data set and the pregnancy data set. In both data sets, the information augmentation and compression methods improved generalization performance. In addition, compressed or collective weights from the multi-layered networks tended to be, ironically, similar to the linear correlation coefficients between inputs and targets, while conventional methods such as logistic regression analysis failed to achieve this.
Optimization of data resampling through GA for the classification of imbalanced datasets Filippo Galli; Marco Vannucci; Valentina Colla
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.409

Abstract

Classification of imbalanced datasets is a critical problem in numerous contexts, where standard methods fail to satisfactorily detect rare patterns due to multiple factors that bias the classifiers toward the frequent class. This paper overviews a novel family of methods for resampling an imbalanced dataset in order to maximize the performance of arbitrary data-driven classifiers. The presented approaches exploit genetic algorithms (GA) to optimize the data selection process according to a set of criteria that assess each candidate sample's suitability. A comparison among the presented techniques on a set of industrial and literature datasets demonstrates the validity of this family of approaches, which is able not only to improve the performance of a standard classifier but also to determine the optimal resampling rate automatically. Future work on the proposed approach will include the development of new criteria for assessing sample suitability.
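A GA-driven resampling loop of the kind described here can be sketched as follows; the chromosome encoding (a keep/drop bit per majority-class sample) and the balance-based fitness are illustrative assumptions, not the paper's actual suitability criteria:

```python
import random

def fitness(chrom, n_minority):
    """Toy fitness: how close the kept majority subset is to a 1:1 ratio."""
    kept = sum(chrom)
    if kept == 0:
        return 0.0
    return min(kept, n_minority) / max(kept, n_minority)

def ga_resample(n_majority, n_minority, pop_size=30, gens=40, seed=1):
    rng = random.Random(seed)
    # Each chromosome marks which majority-class samples to keep.
    pop = [[rng.randint(0, 1) for _ in range(n_majority)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, n_minority), reverse=True)
        parents = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_majority)     # single-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_majority)] ^= 1  # one-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, n_minority))
```

In the paper's setting the fitness would instead aggregate the per-sample suitability criteria, letting the GA discover the resampling rate rather than targeting a fixed ratio.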
Nonstandard optimal control problem: case study in an economical application of royalty problem Wan Noor Afifah Wan Ahmad; Suliadi Firdaus Sufahani; Alan Zinober; Azila M Sudin; Muhaimin Ismoen; Norafiz Maselan; Naufal Ishartono
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.357

Abstract

This paper's focal point is the nonstandard Optimal Control (OC) problem, in which the value of the final state variable, y(T), is unknown. Moreover, the Lagrangian integrand is a piecewise-constant function of the unknown final state value y(T). Thus, the problem cannot be solved using Pontryagin's Minimum Principle with the normal boundary conditions at the final time in the classical setting. Furthermore, the free final state value y(T) yields a necessary boundary condition in which the final costate value p(T) is not equal to zero. Therefore, the new necessary condition is that the final state value y(T) equal a certain continuous integral function z, since the integrand is a function of y(T). In this study, the 3-stage piecewise-constant integrand is approximated by a continuous hyperbolic tangent (tanh) function. This paper presents the solution using C++ and the AMPL programming language. The two-point boundary value problem is solved with the indirect (shooting) method, which combines Newton iteration with minimization algorithms (the Golden Section Search and Brent methods). Finally, the results are compared with direct methods (Euler, Runge-Kutta, trapezoidal, and Hermite-Simpson approximations) as a validation process.
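The tanh smoothing step mentioned above can be illustrated as follows; the stage values and breakpoints are hypothetical, and k sets how sharply each tanh approximates a jump of the piecewise-constant integrand:

```python
import math

def smooth_step(y, levels=(1.0, 2.0, 3.0), breaks=(0.5, 1.5), k=50.0):
    """Continuous tanh approximation of a 3-stage piecewise-constant function.

    Each 0.5*(1 + tanh(k*(y - b))) factor rises smoothly from 0 to 1
    around breakpoint b, adding the jump between consecutive levels.
    """
    f = levels[0]
    for b, lo, hi in zip(breaks, levels, levels[1:]):
        f += (hi - lo) * 0.5 * (1.0 + math.tanh(k * (y - b)))
    return f
```

The smooth surrogate is differentiable everywhere, which is what makes the Newton-based shooting iteration applicable.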
Radial greed algorithm with rectified chromaticity for anchorless region proposal applied in aerial surveillance Anton Louise Pernez De Ocampo; Elmer Dadios
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.426

Abstract

In aerial images, human figures are often rendered at low resolution and in relatively small sizes compared to other objects in the scene, or resemble other non-human objects. Localizing trust regions that may contain a human figure is therefore difficult and computationally expensive. The objective of this work is to develop an anchorless region proposal that can distinguish potential persons from other objects and the vegetative background in aerial images. Samples are taken at different angles, altitudes, and environmental conditions such as illumination. The original image is rendered in a rectified color space to create a pseudo-segmented version in which objects of close chromaticity are merged. The geometric features of the resulting segments are then calculated and passed to the Radial-Greed Algorithm, which selects segments resembling human figures as the proposed regions for classification. The proposed method achieved 96.76% lower computational cost than the brute-force sliding-window method and a hit rate of 95.96%. In addition, the proposed method achieved a 98.32% confidence level that it hits target proposals at least 92% of the time.
Japanese sign language classification based on gathered images and neural networks Shin-ichi Ito; Momoyo Ito; Minoru Fukumi
International Journal of Advances in Intelligent Informatics Vol 5, No 3 (2019): November 2019
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v5i3.406

Abstract

This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with a neural network with convolutional and pooling layers (CNN). Gathered-image generation creates images based on mean images: the maximum difference value is computed between blocks of the mean images and the JSL motion images, and the gathered images comprise the blocks having that maximum difference value. The CNN extracts features of the gathered images, while a support vector machine for multi-class classification and a multilayer perceptron are employed to classify 20 JSL words. The experimental results showed a mean recognition accuracy of 94.1% for the proposed method. These results suggest that the proposed method can obtain sufficient information to classify the sample words.

Page 1 of 1 | Total Records: 10