Nurnajmin Qasrina Ann
Universiti Malaysia Pahang

Published: 2 Documents

Parameter Prediction for Lorenz Attractor by using Deep Neural Network
Nurnajmin Qasrina Ann; Dwi Pebrianti; Mohammad Fadhil Abas; Luhur Bayuaji; Mohammad Syafrullah
Indonesian Journal of Electrical Engineering and Informatics (IJEEI) Vol 8, No 3: September 2020
Publisher: IAES Indonesian Section

DOI: 10.52549/ijeei.v8i3.1272

Abstract

Nowadays, most modern deep learning models are based on artificial neural networks. This research presents a deep neural network trained on a high-precision database of the strange Lorenz attractor. The Lorenz system is one of the simplest chaotic systems; it is nonlinear and characterized by unstable dynamic behavior. The research aims to predict whether a given parameter set produces a strange Lorenz attractor. The primary method implemented in this paper is a deep neural network built with the Python Keras library. Different numbers of hidden layers are compared to evaluate the accuracy of the system's predictions. A set of data is used as the input of the neural network, and the prediction accuracy is measured at the output. As a result, 100% correct prediction is achieved when testing on the training data, while only 60% correct prediction is achieved on new random data.
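The Lorenz trajectories that make up such a dataset can be generated by numerical integration. The sketch below, in NumPy rather than the Keras pipeline the paper uses, integrates the Lorenz system with fourth-order Runge-Kutta; the standard chaotic parameters sigma=10, rho=28, beta=8/3, the initial state, and the step size are illustrative assumptions, since the abstract does not state the exact values used.

```python
import numpy as np

def lorenz_deriv(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the Lorenz system at a given (x, y, z) state."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def simulate_lorenz(state0=(1.0, 1.0, 1.0), dt=0.01, steps=5000):
    """Integrate with classic 4th-order Runge-Kutta.

    Returns a (steps + 1, 3) array of (x, y, z) points along the trajectory.
    """
    traj = np.empty((steps + 1, 3))
    traj[0] = state0
    for i in range(steps):
        s = traj[i]
        k1 = lorenz_deriv(s)
        k2 = lorenz_deriv(s + 0.5 * dt * k1)
        k3 = lorenz_deriv(s + 0.5 * dt * k2)
        k4 = lorenz_deriv(s + dt * k3)
        traj[i + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

trajectory = simulate_lorenz()
```

A trajectory like this (or features derived from it) would serve as one input sample; labeling samples by whether their parameters yield the strange attractor gives the binary prediction target the abstract describes.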
Automated-tuned hyper-parameter deep neural network by using arithmetic optimization algorithm for Lorenz chaotic system
Nurnajmin Qasrina Ann; Dwi Pebrianti; Mohd Fadhil Abas; Luhur Bayuaji
International Journal of Electrical and Computer Engineering (IJECE) Vol 13, No 2: April 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i2.pp2167-2176

Abstract

Deep neural networks (DNNs) are highly dependent on their parameterization and require experts to decide which method to implement and how to set the hyper-parameter values. This study proposes automated hyper-parameter tuning for DNNs using a metaheuristic optimization algorithm, the arithmetic optimization algorithm (AOA). AOA exploits the distribution properties of the primary arithmetic operators: multiplication, division, addition, and subtraction. It is mathematically modeled and implemented to optimize processes across a broad range of search spaces. The performance of AOA is evaluated against 29 benchmark functions, and several real-world engineering design problems are used to demonstrate its applicability. The hyper-parameter tuning framework consists of a set of Lorenz chaotic system datasets, a hybrid DNN architecture, and AOA, working together automatically. As a result, AOA produced the highest accuracy on the test dataset with its combination of optimized hyper-parameters for the DNN architecture. Boxplot analysis also showed that AOA with ten particles was the most accurate choice: it had the smallest boxplot spread for all hyper-parameters, indicating the best solution. In particular, the proposed system outperformed the same architecture tuned with particle swarm optimization.
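The AOA search described above alternates between exploration (division and multiplication operators) and exploitation (subtraction and addition operators), steered by two schedules: the math optimizer accelerated (MOA) function and the math optimizer probability (MOP). The sketch below is a minimal NumPy implementation of those core update rules applied to a toy sphere function, not the authors' tuning framework; the population size, bounds, objective, and control constants (alpha=5, mu=0.499, as in the original AOA paper) are illustrative assumptions.

```python
import numpy as np

def aoa_minimize(f, dim, n_particles=10, iters=200, lb=-10.0, ub=10.0,
                 alpha=5.0, mu=0.499, seed=0):
    """Minimal arithmetic optimization algorithm (AOA) sketch.

    Minimizes f over [lb, ub]^dim and returns (best_solution, best_value).
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n_particles, dim))
    fitness = np.apply_along_axis(f, 1, pop)
    best = pop[fitness.argmin()].copy()
    best_val = float(fitness.min())
    eps = np.finfo(float).eps
    scale = (ub - lb) * mu + lb          # search-space scaling term
    for t in range(1, iters + 1):
        # MOA ramps up over time, shifting from exploration to exploitation.
        moa = 0.2 + t * (0.8 / iters)
        # MOP decays over time, shrinking the step size.
        mop = 1.0 - (t ** (1.0 / alpha)) / (iters ** (1.0 / alpha))
        for i in range(n_particles):
            for j in range(dim):
                r1, r2, r3 = rng.random(3)
                if r1 > moa:             # exploration: division / multiplication
                    if r2 > 0.5:
                        pop[i, j] = best[j] / (mop + eps) * scale
                    else:
                        pop[i, j] = best[j] * mop * scale
                else:                    # exploitation: subtraction / addition
                    if r3 > 0.5:
                        pop[i, j] = best[j] - mop * scale
                    else:
                        pop[i, j] = best[j] + mop * scale
            pop[i] = np.clip(pop[i], lb, ub)
            val = float(f(pop[i]))
            if val < best_val:           # keep the best solution found so far
                best_val, best = val, pop[i].copy()
    return best, best_val

best, val = aoa_minimize(lambda x: float(np.sum(x * x)), dim=3)
```

In the paper's setting, `f` would instead train the hybrid DNN on the Lorenz datasets with the candidate hyper-parameters and return its validation error; each AOA particle then encodes one hyper-parameter combination.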