Articles

Pengendalian Kualitas Produk Menggunakan Diagram Kontrol Multivariat p Bayu Iswahyudi Noor; Ika Purnamasari; Fidia Deny Tisna Amijaya
EKSPONENSIAL Vol 10 No 1 (2019)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

Every company competes with others in its industry. One way to win the competition, or at least stay in it, is to pay full attention to product quality, so that the products outperform those of competing companies. Quality control is carried out during the production process in order to meet the expected standard of quality. The multivariate p chart, a development of the p control chart, is one of the methods used for quality control. This research was conducted at the newspaper company Kaltim Post, with the quality characteristics blurred color, asymmetric print, and dirty print. The research was conducted in two phases: phase I covered July 2017 and phase II covered August 2017. The purpose of this research is to determine the result of controlling the production of the Kaltim Post newspaper using the multivariate p chart, the types of defects that occur most often, and the causes of those defect types. The result of controlling the production of the Kaltim Post newspaper using the multivariate p chart is that production is in control in phase I, with an upper control limit of 0.002736, a center line of 0.0024224, and a lower control limit of 0.0021087, so the phase I limits are appropriate for use in phase II. The most common defect type is blurred color, caused by machine, method, material, human, and environmental factors.
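As a rough sketch of how control limits like those reported above arise, the classical p chart (the building block that the multivariate p chart generalizes) places 3-sigma limits around the pooled defect proportion. The defect counts and subgroup sizes below are made-up illustrations, not the Kaltim Post data.

```python
import math

def p_chart_limits(defectives, sample_sizes):
    """3-sigma limits for a classical p chart: CL is the pooled defect
    proportion, UCL/LCL are CL +/- 3*sqrt(CL*(1-CL)/n_bar)."""
    p_bar = sum(defectives) / sum(sample_sizes)      # pooled proportion (center line)
    n_bar = sum(sample_sizes) / len(sample_sizes)    # average subgroup size
    sigma = math.sqrt(p_bar * (1 - p_bar) / n_bar)
    return p_bar - 3 * sigma, p_bar, p_bar + 3 * sigma

lcl, cl, ucl = p_chart_limits([2, 3, 1, 2], [1000, 1000, 1000, 1000])
```

In practice a negative LCL is truncated to zero; the multivariate version combines several such characteristics (here: blur, asymmetry, dirt) into a single monitored statistic.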
Penerapan Metode Analisis Regresi Logistik Biner Dan Classification And Regression Tree (CART) Pada Faktor yang Mempengaruhi Lama Masa Studi Mahasiswa Chairunnisa Chairunnisa; Yuki Novia Nasution; Ika Purnamasari
EKSPONENSIAL Vol 8 No 2 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

Binary logistic regression is a form of logistic regression analysis used to analyze the relationship between a dichotomous dependent variable and several independent variables. Classification and Regression Tree (CART) is a method developed to perform classification analysis on dependent variables on a nominal, ordinal, or continuous scale. In this research, the binary logistic regression and CART methods were applied to data on students of the Faculty of Mathematics and Natural Sciences, Mulawarman University, who graduated in 2016, to determine the characteristics of students classified into two categories: a study period of at most 5 years and a study period of more than 5 years, with five independent variables, namely graduation GPA (X1), gender (X2), type of junior high school (X3), domicile (X4), and major (X5). The factors that influence the students' study period according to binary logistic regression are GPA, gender, type of secondary school, and major. The classification obtained with the CART method is that students with a study period of at most 5 years are those from the Chemistry major or with a GPA between 3.51 and 4.00, while students with a study period of more than 5 years are those with a GPA between 2.00 and 2.75 or between 2.76 and 3.50. In terms of classification accuracy, binary logistic regression predicted the observations correctly in 75.0% of cases, while CART predicted them correctly in 77.27% of cases.
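The core step CART repeats is choosing the split that most reduces Gini impurity. The sketch below shows a single such split on categorical data; the example rows (major, gender) and labels are hypothetical, not the Mulawarman dataset.

```python
def gini(labels):
    """Gini impurity of a list of class labels: 1 - sum of squared class shares."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """One CART step: the (feature, value) split minimising weighted Gini impurity."""
    best = (None, None, gini(labels))  # start from the unsplit impurity
    for j in range(len(rows[0])):
        for v in {r[j] for r in rows}:
            left = [y for r, y in zip(rows, labels) if r[j] == v]
            right = [y for r, y in zip(rows, labels) if r[j] != v]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if w < best[2]:
                best = (j, v, w)
    return best

rows = [("Chemistry", "M"), ("Chemistry", "F"), ("Physics", "M"), ("Physics", "F")]
labels = ["<=5", "<=5", ">5", ">5"]
```

Here `best_split(rows, labels)` picks the major (feature 0), which separates the classes perfectly, mirroring how the paper's tree isolates Chemistry students.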
Peramalan Jumlah Penduduk Kota Samarinda Dengan Menggunakan Metode Pemulusan Eksponensial Ganda dan Tripel Dari Brown Reyham Nopriadi Gurianto; Ika Purnamasari; Desi Yuniarti
EKSPONENSIAL Vol 7 No 1 (2016)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

Forecasting is a process or method for predicting an event that will occur in the future. Exponential smoothing is a moving-average forecasting method that applies weights that decrease exponentially toward older observation values. This study discusses Brown's double and triple exponential smoothing methods for predicting the population of the city of Samarinda in 2014, 2015, and 2016, figures the government needs in order to know the city's population. Brown's double and triple exponential smoothing are extrapolation methods that use a time series of past history to produce forecasts for the future, which serve as a guide in decision-making processes. Using Brown's double exponential smoothing with a smoothing parameter alpha of 0.52, the forecast total population was 843,653 residents in 2014, 898,647 residents in 2015, and 944,716 residents in 2016, with a mean absolute deviation (MAD) of 12,937 and a mean absolute percentage error (MAPE) of 2.4548. Using Brown's triple exponential smoothing with a parameter alpha of 0.4, the forecast total population was 854,766 residents in 2014, 898,647 residents in 2015, and 944,716 residents in 2016, with a MAD of 14,709 and a MAPE of 2.7589.
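Brown's double exponential smoothing runs two smoothers and combines them into a level and a trend. A minimal sketch, with the common "initialise both smoothers at the first observation" convention (the paper may initialise differently):

```python
def brown_double(series, alpha, m):
    """Brown's double exponential smoothing, m-step-ahead forecast.
    Level a = 2*S' - S'', trend b = alpha/(1-alpha) * (S' - S'')."""
    s1 = s2 = series[0]  # assumed initialisation: both smoothers start at x_1
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1   # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2  # second smoothing
    a = 2 * s1 - s2
    b = alpha / (1 - alpha) * (s1 - s2)
    return a + b * m
```

On a constant series the trend term vanishes and the forecast equals the constant; on a steadily growing series (like a city population) the forecast extrapolates above the last observation, which is the behaviour the paper exploits.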
Optimasi Algoritma Naïve Bayes Menggunakan Algoritma Genetika Untuk Memprediksi Kelulusan Elisa Feronica; Yuki Novia Nasution; Ika Purnamasari
EKSPONENSIAL Vol 13 No 2 (2022)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

DOI: 10.30872/eksponensial.v13i2.1057

Abstract

The Naïve Bayes algorithm is a classification method that uses the principles of probability to create predictive models. Naïve Bayes is based on the assumption that all of its attributes are independent, and it can be optimized with a genetic algorithm. A genetic algorithm is an optimization technique that works by imitating the process of evaluating and changing the genetic structure of living creatures. In this study, the Naïve Bayes algorithm was optimized with a genetic algorithm to predict student graduation using the attributes gender, regional origin, admission path, and employment status. The data used are the students of the Mathematics Department, Faculty of Mathematics and Natural Sciences, Mulawarman University, who graduated between March 2018 and December 2020. The results of this study indicate that the accuracy produced by Naïve Bayes, 50%, increased by 16.67% to 66.67% after the attributes were optimized with the genetic algorithm, which selected 3 attributes, namely regional origin, admission path, and employment status.
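For categorical attributes like these, Naïve Bayes scores each class by its prior times the product of per-attribute conditional probabilities. A minimal sketch with Laplace smoothing (an added assumption here, the paper does not state its smoothing); the training rows are invented, not the Mulawarman records:

```python
from collections import Counter

def naive_bayes(train, labels, x):
    """Categorical Naive Bayes: argmax over classes of P(c) * prod_j P(x_j | c),
    with Laplace (+1) smoothing on the conditionals."""
    classes = Counter(labels)
    best, best_p = None, -1.0
    for c, nc in classes.items():
        p = nc / len(labels)  # class prior
        for j, v in enumerate(x):
            count = sum(1 for r, y in zip(train, labels) if y == c and r[j] == v)
            values = len({r[j] for r in train})   # distinct values of attribute j
            p *= (count + 1) / (nc + values)      # smoothed conditional
        if p > best_p:
            best, best_p = c, p
    return best

train = [("M", "work"), ("M", "work"), ("F", "none"), ("F", "none")]
labels = ["late", "late", "on-time", "on-time"]
```

The genetic-algorithm step in the paper then searches over attribute subsets, rerunning exactly this classifier on each candidate subset and keeping the one with the best accuracy.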
Peramalan Menggunakan Time Invariant Fuzzy Time Series Siti Rahmah Binaiya; Memi Nor Hayati; Ika Purnamasari
EKSPONENSIAL Vol 10 No 2 (2019)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

Forecasting is a technique for estimating a future value by looking at past and current data. Fuzzy Time Series is a forecasting method that uses fuzzy principles as its basis, where the forecasting process uses the concept of fuzzy sets. This study discusses the Time Invariant Fuzzy Time Series method developed by Sah and Degtiarev to forecast the Consumer Price Index (CPI) of East Kalimantan Province for May 2018. The Time Invariant Fuzzy Time Series method uses a frequency distribution to determine the interval length, and 13 fuzzy sets are used in the forecasting process. Based on this study, using CPI data of East Kalimantan Province from September 2016 to April 2018, the forecast for May 2018 is 135.977, with a Mean Absolute Percentage Error (MAPE) of 0.0949%, well under 10%.
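Two computable pieces of the pipeline above are the error measure and the interval count. The MAPE formula is standard; the interval rule shown is Sturges' rule, a common frequency-distribution choice that is only an assumption here (the paper's own partition yielded 13 fuzzy sets).

```python
import math

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def sturges_intervals(n):
    """Sturges' rule for the number of class intervals: round(1 + 3.322*log10(n)).
    One common way to build the frequency distribution; not necessarily the paper's."""
    return round(1 + 3.322 * math.log10(n))
```

For example, `mape([100, 200], [90, 220])` averages the 10% errors of both points to 10.0; a MAPE of 0.0949% as reported above means the fuzzy forecasts track the CPI almost exactly.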
Peramalan Regarima Pada Data Time Series Yudha Muhammad Faishol; Ika Purnamasari; Rito Goejantoro
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

The RegARIMA method is a modelling technique that combines an ARIMA model with a regression model that uses dummy variables called regressors. The purpose of this study was to determine a calendar variation model and apply it to predict plane ticket sales for January 2016 to December 2017. The data analysis shows that ticket sales have a seasonal pattern, namely an increase in ticket sales around Idul Fitri. The first step is to determine the regressor, which here is affected by only one feast day, Idul Fitri. A regression model is then fitted, where the dependent variable (Y) is the volume of plane ticket sales and the independent variable (X) is the regressor, giving the regression model Ŷt = 1,029 + 1,335 Xt. The analysis shows that all parameters of the regression model are significant; however, the goodness-of-fit test found that although the residuals are normally distributed, they do not satisfy the white-noise assumption, meaning the residuals still contain autocorrelation. ARIMA modelling was therefore performed on the regression residuals. The residuals are stationary, and estimation yields an ARIMA(0,0,1) model whose parameters are all significant and whose residuals satisfy the white-noise assumption and are normally distributed. The calendar variation model obtained by the RegARIMA method is Yt = 1,029.5 + 1,337.3 Dt + 0.28712 a_{t-1} + a_t. Based on this calendar variation model, plane ticket sales for January 2016 to December 2017 can be predicted.
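The regression step above is OLS on a single 0/1 dummy, and in that special case the fitted intercept is simply the mean of the non-holiday group and the slope is the difference of the group means. A small sketch with invented sales figures (not the paper's data):

```python
def dummy_regression(y, d):
    """OLS of y on a single 0/1 dummy d. For this design the closed form is:
    intercept = mean(y | d=0), slope = mean(y | d=1) - mean(y | d=0)."""
    y0 = [yi for yi, di in zip(y, d) if di == 0]  # non-holiday months
    y1 = [yi for yi, di in zip(y, d) if di == 1]  # Idul Fitri months
    b0 = sum(y0) / len(y0)
    b1 = sum(y1) / len(y1) - b0
    return b0, b1
```

RegARIMA then models the residuals of this fit with an ARIMA term, here the MA(1) component 0.28712 a_{t-1}, so that the final forecasts combine the calendar effect with the autocorrelation structure.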
Bootstrap Aggregating Multivariate Adaptive Regression Splines Marisa Nanda Rahmaniah; Yuki Novia Nasution; Ika Purnamasari
EKSPONENSIAL Vol 7 No 2 (2016)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

MARS is a classification method that focuses on high-dimensional and discontinuous data. The accuracy of MARS can be improved with the bagging (Bootstrap Aggregating) method, which is used to improve the stability, accuracy, and predictive strength of a model. This study discusses the application of bagging MARS to analyzing accreditation, where the accreditation level of a school is predicted from its identifier components; this study therefore identifies those components to build a classification model. The data used are the 2015 accreditation data for primary schools in East Kalimantan Province issued by the Provincial School Accreditation Board (BAP-S/M) of East Kalimantan. This study obtained six components that affect the determination of school accreditation at the primary level; these components are the variables that contribute to the classification, namely the content standard component (X1), the process standard component (X2), the graduates standard component (X3), the teachers and staff standard component (X4), the infrastructure standard component (X5), and the financial standard component (X7). The classification accuracy of the MARS method, measured by the Apparent Error Rate (APER), is 78.87%, while the classification accuracy (using APER) of bagging the best MARS model is 89.44%. This means that bagging MARS gives better classification accuracy than MARS.
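Bagging itself is independent of the base learner: refit on bootstrap resamples, then majority-vote. The sketch below uses a toy 1-nearest-neighbour base learner as a stand-in for MARS (a full MARS fit is far too long to inline); the data are invented.

```python
import random
from collections import Counter

def one_nn(train, labels, x):
    """Toy 1-nearest-neighbour base learner (stand-in for the MARS model)."""
    dists = [abs(t[0] - x[0]) for t in train]
    return labels[dists.index(min(dists))]

def bagging_predict(train, labels, x, base=one_nn, B=25, seed=0):
    """Bootstrap aggregating: fit the base learner on B bootstrap
    resamples of the training set and majority-vote the predictions."""
    rng = random.Random(seed)
    votes = []
    for _ in range(B):
        idx = [rng.randrange(len(train)) for _ in range(len(train))]  # resample with replacement
        votes.append(base([train[i] for i in idx], [labels[i] for i in idx], x))
    return Counter(votes).most_common(1)[0][0]
```

Averaging over resamples reduces the variance of an unstable base learner, which is why the bagged model's APER-based accuracy (89.44%) beats the single MARS fit (78.87%) in the study.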
Perbandingan Hasil Analisis Cluster Dengan Menggunakan Metode Average Linkage Dan Metode Ward Imasdiani Imasdiani; Ika Purnamasari; Fidia Deny Tisna Amijaya
EKSPONENSIAL Vol 13 No 1 (2022)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman

DOI: 10.30872/eksponensial.v13i1.875

Abstract

Hierarchical cluster analysis is an analysis used to classify data based on its characteristics. The average linkage method and the Ward method are methods of hierarchical cluster analysis. Data can be grouped from various aspects, one of which is poverty; this study uses poverty indicator data for East Kalimantan in 2018. The average linkage method is based on the average distance between clusters, while the Ward method merges clusters by minimizing the within-cluster sum of squares. The purpose of this study was to determine the best method based on the ratio of the average within-group standard deviation to the between-group standard deviation. Both the average linkage method and the Ward method produced two clusters. In the average linkage method, the first cluster consists of 7 districts/cities and the second cluster consists of 3 districts/cities, whereas in the Ward method, the first cluster consists of 6 districts/cities and the second cluster consists of 4 districts/cities. Based on the ratio of the average within-group standard deviation (Sw) to the between-group standard deviation (Sb), the ratio for the Ward method, 2.681, is smaller than that for the average linkage method, which indicates that the Ward method is the best method.
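The selection criterion can be sketched directly: average the within-cluster standard deviations (Sw), divide by the standard deviation of the cluster means (Sb), and prefer the clustering with the smaller ratio. This is one common form of the Sw/Sb criterion and is an assumption here; the paper's exact formula (e.g. population vs. sample deviations, multivariate handling) may differ.

```python
import statistics

def sw_sb_ratio(clusters):
    """Sw/Sb cluster-validity ratio for 1-D clusters: average within-cluster
    stdev over the stdev of the cluster means. Smaller = tighter, better-separated."""
    sw = statistics.mean(statistics.pstdev(c) for c in clusters)      # within-group spread
    sb = statistics.pstdev([statistics.mean(c) for c in clusters])    # between-group spread
    return sw / sb
```

For instance, two tight, well-separated clusters like `[[0, 2], [10, 12]]` give a small ratio (0.2); comparing this value between the average-linkage and Ward partitions is exactly the model-selection step the abstract describes.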
Analisis Pengendalian Kualitas Produk Amplang Menggunakan Peta Kendali Kernel Rahmad Fahreza Adiyasa; Desi Yuniarti; Ika Purnamasari
EKSPONENSIAL Vol 10 No 1 (2019)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

Quality control is the use of techniques and activities to maintain and improve the quality of products or services. One quality control method is the Epanechnikov kernel control chart, a control chart used to evaluate nonparametric product quality characteristic data because it does not require particular distributional assumptions. The purpose of this research is to find out whether the 1 kg packaged amplang product of UD. H. Icam Samarinda lies within the control limits, and which factors can cause the weight of the product to become uncontrolled. The results show that no sample point falls outside the control limits of the control chart built with a kernel density function estimate, so it can be concluded that the weight of the product is in a controlled condition. The factors that can cause the products to be out of control are environmental, human, machine, and material factors.
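The density estimate underlying such a chart is a kernel density estimator with the Epanechnikov kernel K(u) = 0.75(1 - u^2) for |u| <= 1. A minimal 1-D sketch (how the paper turns the estimated density into control limits is not shown here):

```python
def epanechnikov_kde(data, x, h):
    """Kernel density estimate at x with the Epanechnikov kernel and bandwidth h:
    f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h), K(u) = 0.75*(1-u^2) for |u| <= 1."""
    total = 0.0
    for xi in data:
        u = (x - xi) / h
        if abs(u) <= 1:              # the kernel has bounded support
            total += 0.75 * (1 - u * u)
    return total / (len(data) * h)
```

The Epanechnikov kernel is a standard choice because its bounded support keeps the estimate local; control limits are then read off from the estimated distribution of the package weights rather than from a normality assumption.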
Proses Optimasi Masalah Penugasan One-Objective dan Two-Objective Menggunakan Metode Hungarian Diang Dewi Tamimi; Ika Purnamasari; Wasono Wasono
EKSPONENSIAL Vol 8 No 1 (2017)
Publisher : Program Studi Statistika FMIPA Universitas Mulawarman


Abstract

An assignment problem is a situation where m workers are assigned to complete n tasks/jobs so as to minimize cost and time or maximize profit and quality by setting the proper task for each worker. Much research has focused on solving the assignment problem, but most of it considers only one objective, such as minimizing the cost of operation. A two-objective assignment problem is an assignment problem with two optimization objectives over the resources of each worker for completing every task/job, in this case cost and time. The case in this research uses primary data drawn from interviews with rattan furniture craftsmen at the Rotan Sejati store, Samarinda. This research optimizes the one-objective and two-objective assignment problems using the Hungarian method. The analysis reveals that the one-objective optimization considering only operation cost gives Rp2,950,000 with a total time of 63 days. The one-objective optimization considering only operation time gives Rp3,290,000 with a total time of 52 days. The one-objective optimization considering only quality gives Rp3,550,000 with a total time of 59 days. The two-objective optimization considering operation cost and operation time gives Rp3,170,000 with a total time of 52 days. The two-objective optimization considering operation cost and quality gives Rp3,380,000 with a total time of 61 days. The two-objective optimization considering operation time and quality gives Rp3,350,000 with a total time of 59 days.
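For the small instances in a case study like this, the optimum the Hungarian method finds can be reproduced by exhaustive search over permutations; the sketch below does exactly that (it is not the Hungarian algorithm itself, which reaches the same answer in polynomial time). The cost matrix is an invented example, not the Rotan Sejati data.

```python
from itertools import permutations

def optimal_assignment(cost):
    """Minimum-cost assignment by brute force over all n! worker->task permutations.
    Same optimum the Hungarian method computes; practical only for small n."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))  # worker i does task perm[i]
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost
```

In production code `scipy.optimize.linear_sum_assignment` does this efficiently; a common way to handle the two-objective case, consistent with the abstract, is to combine the two cost matrices (e.g. after normalization) into one matrix and solve the resulting single-objective problem.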