Articles

Found 8 Documents
Model Matematika COVID-19 dengan Sumber Daya Pengobatan yang Terbatas Rifanti, Utti Marina; Dewi, Atika Ratna; Nurlaili, Nurlaili; Hapsari, Santika Tri
Limits: Journal of Mathematics and Its Applications Vol 18, No 1 (2021)
Publisher : Institut Teknologi Sepuluh Nopember

DOI: 10.12962/limits.v18i1.8207

Abstract

Coronavirus disease 2019 (COVID-19) is an infectious disease caused by Severe Acute Respiratory Syndrome Coronavirus 2. As of December 2020, Indonesia had recorded 617 thousand confirmed COVID-19 cases and a total of 18 thousand COVID-19 deaths. In this study, we use a Susceptible-Exposed-Infected-Recovered (SEIR) compartment model to analyze the impact of limited treatment resources and to predict the dynamics of COVID-19 spread in Indonesia. The method derives the basic reproduction ratio and the equilibrium points through dynamical-system analysis of the non-linear differential equations obtained from the initial model. We then analyze the basic reproduction ratio and the equilibrium points, and predict the course of the pandemic using real case data from Indonesia for 2 March to 30 November 2020. The results show that if the change in infected cases over time remains below 2640 cases, the basic reproduction ratio becomes less than one and the number of infected cases approaches zero entering March 2021. This means that if the average number of daily confirmed cases stays below the maximum treatment-resource capacity of 2640 cases, the model predicts that the disease will begin to disappear in March 2021. Conversely, if daily confirmed cases exceed 2640, the disease is predicted to begin to disappear in June 2021.
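A minimal Python sketch of the SEIR framework the abstract describes; the rates, population size, and initial state below are illustrative stand-ins, not the paper's fitted values for Indonesia:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative SEIR rates (hypothetical, not the paper's fitted values)
beta, sigma, gamma = 0.3, 0.2, 0.1   # transmission, incubation, recovery
N = 1_000_000                        # population size

def seir(t, y):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

y0 = [N - 100, 50, 50, 0]            # almost everyone susceptible at t = 0
sol = solve_ivp(seir, (0, 180), y0)

R0 = beta / gamma                    # basic reproduction ratio of this simple SEIR
print(round(R0, 2))                  # R0 > 1: the outbreak grows
```

Because the compartment derivatives sum to zero, the total population is conserved along the trajectory, which is a quick sanity check on any implementation.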
A COMPARISON OF CLUSTERING BY IMPUTATION AND SPECIAL CLUSTERING ALGORITHMS ON THE REAL INCOMPLETE DATA Ridho Ananda; Atika Ratna Dewi; Nurlaili Nurlaili
Jurnal Ilmu Komputer dan Informasi Vol 13, No 2 (2020): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v13i2.818

Abstract

The existence of missing values strongly inhibits the clustering process. To overcome this, researchers have proposed several solutions, two of which are imputation and special clustering algorithms. This paper compares the clustering results obtained with both approaches on incomplete data. The k-means algorithm was applied to the imputed data. The algorithms used were distribution-free multiple imputation (DFMI), Gabriel eigen (GE), expectation maximization-singular value decomposition (EM-SVD), biplot imputation (BI), and four modified fuzzy c-means (FCM) algorithms: k-means soft constraints (KSC), distance estimation strategy fuzzy c-means (DESFCM), and k-means soft constraints imputed-observed (KSC-IO). The data used were the 2018 environmental performance index (EPI) and simulated data. The optimal clustering on the 2018 EPI data was chosen based on the Silhouette index, whose reliability had first been tested on the simulated dataset. The results show that the Silhouette index validates clustering results well on incomplete data, and that the optimal clustering of the 2018 EPI dataset was obtained by k-means with BI, with a Silhouette index of 0.613 and a time complexity of 0.063. Based on these results, k-means with BI is recommended for clustering analysis of the 2018 EPI dataset.
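The impute-then-cluster pipeline the abstract compares can be sketched as follows; simple mean imputation stands in for the paper's imputation methods (DFMI, GE, EM-SVD, BI), and the two-cluster synthetic data is illustrative:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 4)),    # cluster 1
               rng.normal(3, 0.3, (30, 4))])   # cluster 2
mask = rng.random(X.shape) < 0.1               # knock out ~10% of entries
X_missing = X.copy()
X_missing[mask] = np.nan

# Impute first, then cluster the completed data
X_imp = SimpleImputer(strategy="mean").fit_transform(X_missing)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_imp)

score = silhouette_score(X_imp, labels)        # validity index used in the paper
print(round(score, 3))
```

A Silhouette score near 1 indicates well-separated clusters; comparing this score across imputation methods is the selection criterion the abstract describes.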
PENGEMBANGAN APLIKASI GUI MATLAB UNTUK MENAKSIR KOEFISIEN PARAMETER MODEL REGRESI NON LINIER MENGGUNAKAN ALGORITMA LEVENBERG MARQUARDT Atika Ratna Dewi
Wahana Matematika dan Sains: Jurnal Matematika, Sains, dan Pembelajarannya Vol. 14 No. 1 (2020): April 2020
Publisher : Universitas Pendidikan Ganesha

DOI: 10.23887/wms.v14i1.20901

Abstract

This study aims to build a simple application using the Matlab GUI to estimate the parameters of non-linear regression models. The algorithm used for parameter estimation is the Levenberg-Marquardt algorithm, written in MATLAB syntax and presented in the form of a GUI. The application was developed through an analysis phase, an interface design stage, a coding stage, and a testing phase. The result is a Matlab GUI application that facilitates parameter estimation in non-linear regression models. The parameters are chosen based on the smallest AIC and SC values at t = 0, from which the non-linear regression model is obtained. The results of this study can be used as learning media and as an analytical tool on the subject of parameter estimation in non-linear regression models.
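The estimation step behind such a GUI can be illustrated with SciPy, whose curve_fit uses the Levenberg-Marquardt algorithm for unconstrained problems; the exponential model and its parameter values below are hypothetical, not the paper's model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical non-linear model y = a * exp(b * t); not the paper's model
def model(t, a, b):
    return a * np.exp(b * t)

t = np.linspace(0, 2, 50)
y = model(t, 2.0, 1.5)                          # noise-free synthetic data

# method="lm" selects Levenberg-Marquardt (the default for unconstrained fits)
popt, pcov = curve_fit(model, t, y, p0=[1.0, 1.0], method="lm")
print(np.round(popt, 3))
```

Starting from the initial guess p0, the algorithm iterates between gradient-descent and Gauss-Newton steps until the residual stops improving, recovering the true parameters here because the data are noise-free.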
KLASIFIKASI CITRA X-RAY TORAKS DENGAN MENGGUNAKAN CONTRAST LIMITED ADAPTIVE HISTOGRAM EQUALIZATION DAN CONVOLUTIONAL NEURAL NETWORK (STUDI KASUS: PNEUMONIA) Surya Adi Widiarto; Wahyu Andi Saputra; Atika Ratna Dewi
JIPI (Jurnal Ilmiah Penelitian dan Pembelajaran Informatika) Vol 6, No 2 (2021)
Publisher : STKIP PGRI Tulungagung

DOI: 10.29100/jipi.v6i2.2102

Abstract

Pneumonia is a disease that attacks the lungs. When someone is suspected of having pneumonia, various examinations are carried out to confirm the diagnosis, one of which is examination of the thoracic x-ray image. However, a doctor or radiologist may make mistakes in interpretation. To minimize this, a breakthrough is needed to help doctors and radiologists analyze thoracic x-ray images. One approach is the Convolutional Neural Network (CNN), with the expectation that a CNN can distinguish healthy from pneumonia-affected thoracic x-ray images. However, some factors can degrade x-ray image quality and thereby affect CNN performance. To address this, Contrast Limited Adaptive Histogram Equalization (CLAHE) was used to enhance the images before they were fed to the CNN. In addition, several epoch counts and image sizes were applied to examine their influence on the models' results, and the results were analyzed to determine which model performed best. After testing, the best result was obtained by the model with CLAHE at 180 epochs and an image size of 256x256, which achieved an accuracy of 95.21%.
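The CLAHE preprocessing step can be sketched with scikit-image's equalize_adapthist; the synthetic low-contrast image below stands in for an x-ray, and the clip_limit value is illustrative rather than the paper's setting:

```python
import numpy as np
from skimage import exposure

# Synthetic low-contrast image standing in for a thoracic x-ray
rng = np.random.default_rng(0)
img = rng.integers(80, 120, size=(256, 256)).astype(np.uint8)

# CLAHE: contrast-limited equalization on local tiles; output is float in [0, 1]
clahe = exposure.equalize_adapthist(img, clip_limit=0.03)
print(clahe.shape)
```

The enhanced image spreads the narrow intensity band across the full range, which is why CLAHE is a common preprocessing step before feeding x-rays to a CNN.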
Analisis Data Kecepatan Angin di Pulau Jawa Menggunakan Distribusi Weibull Atika Ratna Dewi; Sri Handini
Jurnal Statistika dan Aplikasinya Vol 6 No 1 (2022): Jurnal Statistika dan Aplikasinya
Publisher : Program Studi Statistika FMIPA UNJ

DOI: 10.21009/JSA.06112

Abstract

Wind is an environmentally friendly renewable energy source with great potential for meeting the energy needs of the world's population. Before wind can be used as renewable energy, research must first determine the wind conditions in an area, and one applicable method is the Weibull distribution. This paper analyzes wind speed data on the island of Java using the Weibull distribution. The analysis gives the following wind speeds by province: Banten 2 m/s with a probability of 65%, DKI Jakarta 3 m/s with a probability of 30%, West Java 3 m/s with a probability of 90%, Central Java 8 m/s with a probability of 11%, East Java 7 m/s with a probability of 7%, and DI Yogyakarta less than 1 m/s with a probability of 98%. Plots of the probability density function of the Weibull distribution show that the only provincial wind-speed curve matching the Weibull distribution curve is that of East Java, with 1 < k < 2, namely k = 1.74.
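Fitting a Weibull shape parameter k to wind-speed data, as done per province, can be sketched with SciPy; the synthetic sample below is drawn with k = 1.74 (the East Java result) and an illustrative scale, then the shape is recovered by maximum likelihood:

```python
from scipy import stats

# Synthetic wind speeds drawn with shape k = 1.74 (the East Java result)
# and an illustrative scale of 3 m/s
speeds = stats.weibull_min.rvs(c=1.74, scale=3.0, size=2000, random_state=0)

# Recover the shape parameter by maximum likelihood (location fixed at 0)
k, loc, scale = stats.weibull_min.fit(speeds, floc=0)
print(round(k, 2))
```

A shape parameter between 1 and 2 gives the right-skewed density typical of site wind speeds, which is the curve shape the abstract compares against.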
DYNAMIC ANALYSIS OF THE COVID-19 MODEL WITH ISOLATION FACTORS Atika Ratna Dewi; Ridho Ananda; Utti Marina Rifanti
BAREKENG: Jurnal Ilmu Matematika dan Terapan Vol 16 No 1 (2022): BAREKENG: Jurnal Ilmu Matematika dan Terapan
Publisher : PATTIMURA UNIVERSITY

DOI: 10.30598/barekengvol16iss1pp047-056

Abstract

Covid-19 is an infectious disease caused by SARS-CoV-2. The virus has spread throughout the world, causing a pandemic. This study aimed to model and analyze the spread of Covid-19 with an isolation factor. The spread of Covid-19 can be cast as an epidemic model by taking into account the populations of susceptible humans (S), infected humans (I), isolated humans (L), and recovered humans (R). The method used in this research was to derive a non-linear system of differential equations, solve the model qualitatively, find the basic reproduction ratio (R0), study the model's behavior by analyzing the dynamics of the equilibrium points, and run model simulations. The model has a disease-free equilibrium point that is asymptotically stable, while the endemic equilibrium point is unstable. The simulation results and the analysis of the R0 value indicate the chance of successful Covid-19 spread, and the isolation factor is a significant control parameter for reducing that value.
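A minimal S-I-L-R sketch of this kind of model, with an isolation rate and illustrative parameters that are not the paper's values; isolation enlarges the denominator of the reproduction ratio, which is how it reduces R0:

```python
from scipy.integrate import solve_ivp

# Illustrative rates (not the paper's values): transmission, isolation, recovery
beta, delta, gamma = 0.4, 0.2, 0.1
N = 10_000

def silr(t, y):
    S, I, L, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - (delta + gamma) * I   # infected leave by isolation or recovery
    dL = delta * I - gamma * L                    # isolated humans recover at rate gamma
    dR = gamma * (I + L)
    return [dS, dI, dL, dR]

sol = solve_ivp(silr, (0, 300), [N - 10, 10, 0, 0])

# Isolation adds delta to the denominator, lowering the reproduction ratio
R0 = beta / (delta + gamma)
print(round(R0, 2))
```

With delta = 0 this collapses to a plain SIR model with R0 = beta/gamma = 4, so the isolation term here cuts the reproduction ratio by a factor of three.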
Komparasi Algoritme C4.5 Dan Naïve Bayes Dalam Klasifikasi Produk Zam–Zam Time Berdasarkan Tingkat Kepuasan Pelanggan Dwi Puspa Martiyaningsih; Rima Dias Ramadhani; Atika Ratna Dewi
Progresif: Jurnal Ilmiah Komputer Vol 19, No 2: Agustus 2023
Publisher : STMIK Banjarbaru

DOI: 10.35889/progresif.v19i2.1226

Abstract

The grouping of Zam-Zam Time products based on the level of customer satisfaction is carried out using the C4.5 and Naïve Bayes classification algorithms. Classification of Zam-Zam Time products is performed to determine which products are Best Selling or Less Selling. The purpose of this study is to measure and analyze which algorithm best handles customer-satisfaction data for classifying Zam-Zam Time products as Best Selling or Less Selling. The method used was data preprocessing through questionnaire distribution and labeling, drawn from Zam-Zam Time's own private (primary) data and the responses of 400 customers, followed by a classification analysis. The C4.5 algorithm classified Zam-Zam Time products as Best Selling or Less Selling with a training accuracy of 98% (computation time 0.0040 seconds) and a testing accuracy of 96% (computation time 0.0020 seconds) at a max_depth of 8, while Naïve Bayes achieved a training accuracy of 90% (computation time 0.0050 seconds) and a testing accuracy of 85% (computation time 0.0020 seconds).

Keywords: Classification; Customer Satisfaction; C4.5 Algorithm; Naïve Bayes; Zam-Zam Time
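The comparison can be reproduced in outline with scikit-learn, using a synthetic stand-in for the 400-respondent questionnaire data; note that DecisionTreeClassifier implements CART with an entropy criterion, a common stand-in for C4.5 rather than C4.5 itself:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the 400-respondent questionnaire data
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Entropy-based tree with max_depth=8 (CART stand-in for C4.5) vs. Naive Bayes
tree = DecisionTreeClassifier(criterion="entropy", max_depth=8,
                              random_state=0).fit(X_tr, y_tr)
nb = GaussianNB().fit(X_tr, y_tr)

acc_tree = accuracy_score(y_te, tree.predict(X_te))
acc_nb = accuracy_score(y_te, nb.predict(X_te))
print(acc_tree, acc_nb)
```

Comparing held-out test accuracy (and, as in the paper, wall-clock training time) is the standard way to decide between the two algorithms on a given dataset.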
Unsupervised Feature Selection Based on Self-configuration Approaches using Multidimensional Scaling Ridho Ananda; Atika Ratna Dewi; Maifuza Binti Mohd Amin; Miftahul Huda; Gushelmi Gushelmi
Jambura Journal of Mathematics Vol 5, No 2: August 2023
Publisher : Department of Mathematics, Universitas Negeri Gorontalo

DOI: 10.34312/jjom.v5i2.20397

Abstract

Researchers often collect many features so that no principal information is lost. However, a large number of features can cause problems: irrelevant or redundant features reduce the validity of analysis results. One solution is feature selection. Feature selection methods divide into two types, supervised and unsupervised. Supervised feature selection can only be carried out on labeled data. In unsupervised feature selection, there are three approaches: correlation, configuration, and variance. This study proposes an unsupervised feature selection that combines correlation and configuration using multidimensional scaling (MDS). The proposed algorithm, MDS-Clustering, uses hierarchical and non-hierarchical clustering. The result of MDS-Clustering is compared with existing feature selection methods under three schemes: 75%, 50%, and 25% of features selected. The dataset used in this study is the UCI dataset. The validity measures used are the goodness-of-fit of the proximity matrix (GoFP) and the accuracy of a classification algorithm. The comparison shows that the proposed feature selection is worth recommending as a new approach in the feature selection process; moreover, on certain data the algorithm can outperform existing feature selection methods.
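A toy version of the correlation-plus-configuration idea: embed the features with metric MDS on a correlation-based distance, cluster the embedded features, and keep one representative per cluster. The 50% scheme on the iris data is illustrative only; this is not the paper's MDS-Clustering algorithm:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

X = load_iris().data                      # 4 features

# Correlation-based distance between features: correlated features sit close
corr = np.corrcoef(X.T)
dist = 1 - np.abs(corr)

# Embed the features (not the samples) with metric MDS, then cluster them
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Keep one representative feature per cluster (a 50% selection scheme)
selected = [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]
print(selected)
```

Choosing the number of clusters sets the fraction of features retained, which is how the 75%/50%/25% comparison schemes in the abstract can be realized.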