Found 13 Documents
DEVELOPING WEB-BASED INFORMATION SYSTEM FOR BOOKING INFLUENCER SERVICES ON CYCLONE MANAGEMENT Frisca Fitria; Genta Sahuri
IT for Society Vol 7, No 1 (2022)
Publisher : President University

DOI: 10.33021/itfs.v7i1.4534

Abstract

Cyclone Management is an influencer marketing agency based in Cikarang, Indonesia. Cyclone Management's talent portfolio includes more than 80 portfolio entries, and its Instagram feed carries hundreds of tags from talents who have already collaborated with several big brands. This final project discusses building a web-based information system for booking influencer services at Cyclone Management. The system displays information about each talent to help brands choose the right influencer to promote their products.
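The core of such a booking system is matching a brand's brief against the talent catalog. A minimal sketch of that lookup, with hypothetical field names (the abstract does not describe the actual schema), might look like:

```python
from dataclasses import dataclass

@dataclass
class Talent:
    """One entry in the agency's talent catalog (illustrative fields only)."""
    name: str
    category: str    # e.g. "beauty", "tech", "food"
    followers: int
    rate_idr: int    # assumed booking rate in Indonesian rupiah

def match_talents(catalog, category, budget_idr):
    """Return talents in the requested category whose rate fits the budget,
    most-followed first, so a brand can shortlist candidates."""
    hits = [t for t in catalog if t.category == category and t.rate_idr <= budget_idr]
    return sorted(hits, key=lambda t: t.followers, reverse=True)

catalog = [
    Talent("A", "beauty", 120_000, 3_000_000),
    Talent("B", "beauty", 450_000, 8_000_000),
    Talent("C", "tech", 90_000, 2_500_000),
]
shortlist = match_talents(catalog, "beauty", 5_000_000)  # only "A" fits the budget
```

The real system would back this with a database and a web front end; the filter-then-rank step above is just the decision the abstract says the system supports.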
MIDI-based generative neural networks with variational autoencoders for innovative music creation Rosalina Rosalina; Genta Sahuri
International Journal of Advances in Applied Sciences Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijaas.v13.i2.pp360-370

Abstract

By utilizing variational autoencoder (VAE) architectures in musical instrument digital interface (MIDI)-based generative neural networks (GNNs), this study explores the field of creative music composition. The study evaluates the success of VAEs in generating musical compositions that exhibit both structural integrity and a resemblance to authentic music. Despite achieving convergence in the latent space, the degree of convergence falls slightly short of initial expectations. This prompts an exploration of contributing factors, with a particular focus on the influence of training data variation. The study acknowledges the optimal performance of VAEs when exposed to diverse training data, emphasizing the importance of sufficient intermediate data between extreme ends. The intricacies of latent space dimensions also come under scrutiny, with challenges arising in creating a smaller latent space due to the complexities of representing data in N dimensions. The neural network tends to position data further apart, and incorporating additional information necessitates exponentially more data. Despite the suboptimal parameters employed in the creation and training process, the study concludes that they are sufficient to yield commendable results, showcasing the promising potential of MIDI-based GNNs with VAEs in pushing the boundaries of innovative music composition.
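Two ingredients make a VAE's latent space trainable: the reparameterization trick, which lets gradients flow through the sampling step, and a KL-divergence penalty that pulls the posterior toward a standard normal prior. A minimal one-dimensional sketch of both (not the paper's implementation, which uses a full neural encoder/decoder over MIDI data):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), so the randomness is
    isolated in eps and gradients can pass through mu and log_var."""
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)) for one latent dimension:
    0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)
```

The penalty is zero exactly when the posterior already matches the prior (mu = 0, log_var = 0) and grows as the encoder pushes latent codes apart, which is the tension behind the convergence behavior the abstract describes.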
Generating intelligent agent behaviors in multi-agent game AI using deep reinforcement learning algorithm Rosalina Rosalina; Axel Sengkey; Genta Sahuri; Rila Mandala
International Journal of Advances in Applied Sciences Vol 12, No 4: December 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijaas.v12.i4.pp396-404

Abstract

Games are used to train reinforcement learning (RL) agents because they approximate the complexity and high dimensionality of real-world data. By using games, RL researchers can avoid the high experimental cost of training an agent to perform intelligent tasks. The objective of this research is to generate intelligent agent behaviors in multi-agent game artificial intelligence (AI) using a deep reinforcement learning (DRL) algorithm. A basic RL algorithm, the deep Q network, is chosen for implementation. The agent is trained on the environment's raw pixel images and the action list. Experiments with this algorithm show the agent's ability to choose favorable actions. In the default setting, training uses 1 epoch and a learning rate of 0.0025; the number of training iterations is set to one because the training function is called every 4 timesteps. The authors also experimented with two different training scenarios and compared the results. The experimental findings demonstrate that the agents learn correctly and successfully while actively participating in the game in real time. Additionally, the agent can quickly adapt to a different enemy on a varied map thanks to knowledge retained from prior training.
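A deep Q network learns by nudging Q(s, a) toward the bootstrapped target r + γ·max Q(s', ·). A tabular sketch of that single update step, using the paper's stated learning rate of 0.0025 (the discount factor is assumed, and the real system replaces the table with a convolutional network over raw pixels):

```python
from collections import defaultdict

ALPHA = 0.0025   # learning rate from the paper's default setting
GAMMA = 0.99     # discount factor (assumed; not stated in the abstract)

Q = defaultdict(float)  # (state, action) -> estimated value, zero-initialized

def q_update(state, action, reward, next_state, actions):
    """One temporal-difference update: move Q(s, a) a step of size ALPHA
    toward the target reward + GAMMA * max over next actions."""
    target = reward + GAMMA * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# One update from a zero-initialized table with reward 1:
q_update("s0", "fire", 1.0, "s1", ["fire", "move"])
```

Calling the update only every 4 timesteps, as the abstract describes, is a standard DQN efficiency measure: consecutive frames are highly correlated, so skipping steps reduces redundant gradient work.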