In recent years, deep learning-based financial modeling tools have grown in popularity. Research on stock forecasting is crucial to understanding how a nation's economy is performing, and the study of intrinsic value and stock market forecasting has significant theoretical implications and a broad range of potential applications. Hyperparameter search is one of the trickiest challenges in machine learning and deep learning projects. In this paper, we evaluate and analyze optimal hyperparameter search, using the Optuna framework, for long short-term memory (LSTM) models developed to forecast stock prices. We examined a number of hyperparameters across several LSTM architectures, including the optimizer (SGD, Adagrad, RMSprop, Nadam, Adamax, and Adam), the number of LSTM hidden units, the dropout rate, the number of epochs, the batch size, and the learning rate. The experimental results indicated that, of the four LSTM models tested (single LSTM model 1, single LSTM model 2, stacked LSTM model 1, and stacked LSTM model 2), single LSTM model 1 was the most effective: it achieved the lowest loss of all the models and the lowest root mean square error (RMSE), 7.21. Automatic hyperparameter tuning also yielded lower losses than manual tuning.
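To illustrate the kind of search space described above, the sketch below performs a simple random search over the same hyperparameters (optimizer, hidden units, dropout, batch size, learning rate). It is a dependency-free analogue of what Optuna's `trial.suggest_categorical` / `trial.suggest_float` API does; the surrogate loss function, the candidate values, and the "good" configuration it rewards are all illustrative assumptions, standing in for actually training an LSTM and measuring validation RMSE.

```python
import math
import random

# Hypothetical surrogate loss: stands in for training an LSTM on stock
# data and returning its validation RMSE. The "preferred" configuration
# encoded here (Adam, 64 units, dropout 0.2, lr ~1e-3) is an assumption
# for demonstration only, not a result from the paper.
def surrogate_rmse(params):
    rmse = 7.0
    rmse += abs(math.log10(params["learning_rate"]) + 3) * 0.5
    rmse += abs(params["hidden_units"] - 64) / 64.0
    rmse += abs(params["dropout"] - 0.2)
    rmse += 0.1 if params["optimizer"] != "Adam" else 0.0
    return rmse

# Search space mirroring the hyperparameters listed in the abstract;
# the specific candidate values are illustrative.
search_space = {
    "optimizer": ["SGD", "Adagrad", "RMSprop", "Nadam", "Adamax", "Adam"],
    "hidden_units": [32, 64, 128],
    "batch_size": [16, 32, 64],
}

def sample_trial(rng):
    return {
        "optimizer": rng.choice(search_space["optimizer"]),
        "hidden_units": rng.choice(search_space["hidden_units"]),
        "batch_size": rng.choice(search_space["batch_size"]),
        "dropout": rng.uniform(0.0, 0.5),
        # Log-uniform sampling over [1e-4, 1e-1], analogous to
        # Optuna's suggest_float("lr", 1e-4, 1e-1, log=True).
        "learning_rate": 10 ** rng.uniform(-4, -1),
    }

def run_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_params, best_rmse = None, float("inf")
    for _ in range(n_trials):
        params = sample_trial(rng)
        rmse = surrogate_rmse(params)
        if rmse < best_rmse:
            best_params, best_rmse = params, rmse
    return best_params, best_rmse

best_params, best_rmse = run_search()
```

In Optuna itself, `run_search` would be replaced by `optuna.create_study(direction="minimize")` followed by `study.optimize(objective, n_trials=...)`, where the objective trains the model and returns its RMSE; Optuna's TPE sampler then focuses later trials on promising regions instead of sampling uniformly at random.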
Copyright © 2023