Articles

Found 21 Documents
APPLICATION OF THE FUZZY WEIGHTED PRODUCT (WP) METHOD WITH ENTROPY WEIGHTING Dwi Ispriyanti; Azizah Mulia Mawarni; Alan Prahutama; Tarno Tarno
Jurnal Statistika Universitas Muhammadiyah Semarang Vol 8, No 1 (2020): Jurnal Statistika Universitas Muhammadiyah Semarang
Publisher : Department of Statistics, Faculty of Mathematics and Natural Sciences, UNIMUS

Full PDF (316.425 KB) | DOI: 10.26714/jsunimus.8.1.2020.%p

Abstract

Through the Directorate General of Higher Education, Ministry of National Education, the government allocates funds to provide scholarships both to students who are economically unable to finance their education and to students with notable achievements. This learning assistance in the form of scholarships is given to students at various universities, including Diponegoro University. The scholarships awarded include the Academic Achievement (PPA) scholarship, given to outstanding students, and the Student Learning Assistance (BBM) scholarship, given to underprivileged students. In recruiting prospective PPA scholarship recipients, the selection committee applies several assessment criteria: GPA, parents' income, championship achievements, semester, number of dependents, and household electrical power. The PPA scholarship selection system has not been effective even though it is computer-assisted, so a decision-making method is needed to assist the selection. The method applied to select scholarship recipients is the Weighted Product (WP) method with entropy weighting; the criterion values are first converted to fuzzy numbers. The Fuzzy Weighted Product (WP) method successfully selected PPA scholarship recipients with optimal results, helping the screening committee.
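
As a rough illustration of the entropy-weighted Weighted Product step described above, the sketch below builds a small decision matrix in Python, derives entropy weights, and ranks the applicants. The applicant scores and the choice of which criteria count as cost criteria are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = applicants, columns = criteria
# (GPA, parents' income, achievements, semester, dependents, electric power),
# already converted to fuzzy-scale scores as the paper describes.
X = np.array([
    [4, 2, 3, 5, 4, 2],
    [3, 4, 1, 3, 2, 3],
    [5, 1, 4, 7, 5, 1],
], dtype=float)

# Entropy weighting: column-wise proportions, entropy per criterion, then
# weights proportional to each criterion's degree of diversification.
P = X / X.sum(axis=0)
k = 1.0 / np.log(X.shape[0])
E = -k * np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
w = (1 - E) / np.sum(1 - E)

# Weighted Product: benefit criteria get exponent +w, cost criteria -w
# (here only parents' income is treated as a cost criterion, an assumption).
cost = np.array([False, True, False, False, False, False])
S = np.prod(X ** np.where(cost, -w, w), axis=1)   # preference score per applicant
V = S / S.sum()                                   # relative preference
print("ranking (best first):", np.argsort(-V))
```
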
TIME SERIES DATA ANALYSIS USING THE WAVELET THRESHOLDING METHOD Yudi Ari Wibowo; Suparti Suparti; Tarno Tarno
Jurnal Gaussian Vol 1, No 1 (2012): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (678.49 KB) | DOI: 10.14710/j.gauss.v1i1.918

Abstract

Recently, wavelets have been used in various statistical applications. The wavelet transform is a nonparametric method used in signal analysis, data compression, and time series analysis. Wavelet thresholding is a method that reconstructs a signal from the largest wavelet coefficients: only coefficients greater than a specified value are retained, while the remaining coefficients are discarded because they are considered negligible. This specified value is called the threshold. The smoothness of the estimate is determined by several factors, such as the wavelet function, the type of thresholding function, the resolution level, and the threshold parameter; the threshold parameter is the most dominant, so the optimal threshold value must be selected. In the simulation study, stationary, nonstationary, and nonlinear data were analyzed. The wavelet thresholding method gives a smaller Mean Square Error (MSE) than ARIMA, so wavelet thresholding performs quite well in the analysis of time series data.
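
A minimal sketch of the wavelet thresholding idea, assuming the PyWavelets package and a synthetic noisy series in place of the paper's simulated data; the wavelet, decomposition level, and soft universal threshold are illustrative choices rather than the paper's.

```python
import numpy as np
import pywt

# Synthetic noisy series standing in for the paper's stationary /
# nonstationary / nonlinear simulation data.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 256)
y = np.sin(t) + 0.3 * rng.standard_normal(t.size)

# Decompose with a Daubechies wavelet, then shrink the detail coefficients.
coeffs = pywt.wavedec(y, 'db4', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest level
lam = sigma * np.sqrt(2 * np.log(y.size))             # universal threshold
coeffs[1:] = [pywt.threshold(c, lam, mode='soft') for c in coeffs[1:]]

# Reconstruct the smoothed series and report MSE against the clean signal.
y_hat = pywt.waverec(coeffs, 'db4')[:y.size]
mse = np.mean((y_hat - np.sin(t)) ** 2)
print(f"MSE of wavelet-threshold estimate: {mse:.4f}")
```
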
MODELING THE INFLATION RATE IN CENTRAL JAVA PROVINCE USING PANEL DATA REGRESSION Dody Apriliawan; Tarno Tarno; Hasbi Yasin
Jurnal Gaussian Vol 2, No 4 (2013): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (465.378 KB) | DOI: 10.14710/j.gauss.v2i4.3791

Abstract

Panel regression is a regression on a combination of cross-section and time series data. There are three approaches to estimating a panel regression: the common effect model (CEM), the fixed effect model (FEM), and the random effect model (REM). In the CEM, the parameters are estimated using Ordinary Least Squares (OLS). In the FEM, the parameters are estimated by OLS with the addition of dummy variables. In the REM, the error is assumed to be random and the parameters are estimated by Generalized Least Squares (GLS). This study aims to analyze the factors that influence inflation in Central Java Province using panel regression. Based on the panel regression test results, the appropriate model is the CEM, with parameters estimated by OLS with cross-section weights. The model shows that the Consumer Price Index (CPI), the Minimum Salary of City/Regency (MSCR), and economic growth significantly affect the percentage of inflation in Central Java Province.
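
To make the three estimation approaches concrete, here is a small Python sketch on fabricated panel data using statsmodels: the CEM is pooled OLS, the FEM uses city dummies, and the REM is approximated by a random-intercept mixed model, which is my stand-in rather than the paper's exact GLS estimator. The cities, years, and coefficients are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated panel: inflation in three cities over eight years, with CPI,
# city/regency minimum wage (MSCR) and economic growth as regressors.
rng = np.random.default_rng(1)
rows = [(c, y) for c in ["Semarang", "Solo", "Tegal"] for y in range(2006, 2014)]
df = pd.DataFrame(rows, columns=["city", "year"])
df["cpi"] = rng.normal(130, 5, len(df))
df["mscr"] = rng.normal(1.2, 0.1, len(df))
df["growth"] = rng.normal(5, 1, len(df))
df["inflation"] = 0.05 * df["cpi"] + 2 * df["mscr"] + 0.3 * df["growth"] + rng.normal(0, 0.5, len(df))

# Common effect model (CEM): pooled OLS, ignoring the panel structure.
cem = smf.ols("inflation ~ cpi + mscr + growth", data=df).fit()

# Fixed effect model (FEM): OLS with city dummies (least squares dummy variables).
fem = smf.ols("inflation ~ cpi + mscr + growth + C(city)", data=df).fit()

# Random effect model (REM): approximated here by a random-intercept mixed model.
rem = smf.mixedlm("inflation ~ cpi + mscr + growth", data=df, groups=df["city"]).fit()

print(cem.params, fem.params, rem.params, sep="\n\n")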
TRUNCATED SPLINE REGRESSION MODELING FOR LONGITUDINAL DATA (Case Study: Monthly Stock Prices of the Banking Stock Group, January 2009 – December 2015) Khoirunnisa Nur Fadhilah; Suparti Suparti; Tarno Tarno
Jurnal Gaussian Vol 5, No 3 (2016): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (700.706 KB) | DOI: 10.14710/j.gauss.v5i3.14699

Abstract

Stocks are securities that can be bought and sold by individuals or institutions as a sign of ownership, by any person or business entity, in a company. By market capitalization, stocks are divided into three groups: large capitalization (big-cap), medium capitalization (mid-cap), and small capitalization (small-cap). Stock prices fluctuate up and down because of several factors, one of which is inflation. Longitudinal data are observations on n mutually independent subjects, with each subject observed repeatedly at different, mutually dependent time points. The longitudinal stock-price data are modeled with a truncated spline nonparametric regression approach. The best spline model depends on the choice of optimal knot points, namely those with the minimum Generalized Cross Validation (GCV) value. The best truncated spline regression is a spline of order 2 with 3 knot points for each subject of the longitudinal data. Using this model, the MAPE values are 29.93% for PT Bank Mandiri (Persero) Tbk., 16.67% for PT Bank Bukopin Tbk., and 12.99% for PT Bank Bumi Arta Tbk. Keywords: stocks, longitudinal data, truncated spline, GCV
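
The knot-selection step can be sketched as follows: build the truncated power basis, fit by least squares, and keep the knot set with the smallest GCV. The series, the candidate knots, and reading "order 2" as a linear truncated basis are illustrative assumptions, not the paper's data or exact convention.

```python
import numpy as np
from itertools import combinations

def truncated_design(x, knots, order=2):
    """Design matrix of a truncated power spline: 1, x, ..., x^(order-1), (x-k)_+^(order-1)."""
    cols = [x ** p for p in range(order)]
    cols += [np.maximum(x - k, 0.0) ** (order - 1) for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots, order=2):
    """Generalized Cross Validation score for a given knot set."""
    X = truncated_design(x, knots, order)
    H = X @ np.linalg.pinv(X)                 # hat matrix
    resid = y - H @ y
    n = len(y)
    return (resid @ resid / n) / (1 - np.trace(H) / n) ** 2

# Hypothetical monthly stock-price series for one subject (Jan 2009 - Dec 2015).
rng = np.random.default_rng(2)
x = np.arange(84, dtype=float)
y = 5000 + 20 * x + 300 * np.sin(x / 12) + rng.normal(0, 100, x.size)

# Pick the 3-knot combination with minimum GCV, as the paper does per subject.
candidates = np.quantile(x, np.linspace(0.1, 0.9, 9))
best = min(combinations(candidates, 3), key=lambda k: gcv(x, y, k))
print("optimal knots:", best, "GCV:", gcv(x, y, best))
```
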
SEASONAL GENERALIZED SPACE TIME AUTOREGRESSIVE (SGSTAR) MODELING (Case Study: Rice Production in Demak, Boyolali, and Grobogan Regencies) Aisha Shaliha Mansoer; Tarno Tarno; Yuciana Wilandari
Jurnal Gaussian Vol 5, No 4 (2016): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (600.045 KB) | DOI: 10.14710/j.gauss.v5i4.14716

Abstract

The Generalized Space Time Autoregressive (GSTAR) model is a more flexible generalization of the Space Time Autoregressive (STAR) model that can express linear relationships across time and locations. The purpose of this study is to construct a GSTAR model for forecasting rice production in three districts of Central Java. The data used to construct the model are quarterly rice production figures for Demak, Boyolali, and Grobogan from 1987 through 2014. Based on the empirical results using GSTAR models with uniform, binary, inverse-distance, and normalized cross-correlation weights, GSTAR(31)-I(1)3 with uniform weights is the optimal model. The model shows that every location is influenced by the location itself. Keywords: GSTAR, Space Time, uniform weight
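
A compact sketch of how a GSTAR model with a uniform spatial weight can be estimated by per-location OLS. The data are simulated stand-ins for the three districts' quarterly production, and a first-order GSTAR on first-differenced series is shown for simplicity rather than the paper's chosen GSTAR(31)-I(1)3 specification.

```python
import numpy as np

# Simulated quarterly rice-production stand-ins for Demak, Boyolali and Grobogan
# (one column per district); the real series runs from 1987 through 2014.
rng = np.random.default_rng(3)
Z = 100 + np.cumsum(rng.normal(0, 1, (112, 3)), axis=0)
D = np.diff(Z, axis=0)                        # first differencing of each series

# Uniform spatial weight matrix: zero diagonal, equal weight to the other districts.
N = D.shape[1]
W = (np.ones((N, N)) - np.eye(N)) / (N - 1)

# GSTAR(1;1) estimated location by location with OLS:
#   d_t[i] = phi0[i] * d_{t-1}[i] + phi1[i] * (W d_{t-1})[i] + e_t[i]
Y, own, nbr = D[1:], D[:-1], D[:-1] @ W.T
phi0, phi1 = np.empty(N), np.empty(N)
for i in range(N):
    A = np.column_stack([own[:, i], nbr[:, i]])
    phi0[i], phi1[i] = np.linalg.lstsq(A, Y[:, i], rcond=None)[0]
print("own-lag parameters:    ", phi0)
print("spatial-lag parameters:", phi1)
```
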
VALUE AT RISK CALCULATION FOR A STOCK PORTFOLIO USING COPULAS (Case Study: Indonesian Company Stocks, October 13, 2011 – October 12, 2016) Oktafiani Widya Ningrum; Tarno Tarno; Di Asih I Maruddani
Jurnal Gaussian Vol 6, No 2 (2017): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (850.506 KB) | DOI: 10.14710/j.gauss.v6i2.16952

Abstract

Investment is one of the ways widely used by people to pursue future profit. Stock data obtained from observations of stock prices can be categorized as time series data, which usually fluctuate rapidly over time, so the residual variance changes over time rather than remaining constant, a situation often called heteroscedasticity. Forecasting and data analysis are intended to minimize risk and uncertainty. Risk cannot be avoided, but it can be managed and estimated using the Value at Risk (VaR) measurement tool. Copula theory is one of the tools that can be used to fit a joint distribution because it does not require the normality assumption, making it flexible enough for a variety of data, especially financial data. This research uses the Copula-GARCH method: the return data of three highly volatile Indonesian company stocks, PT Vale Indonesia Tbk (INCO), PT Bank Central Asia Tbk (BCA), and PT Indocement Tunggal Tbk (INTP), over the period October 13, 2011 to October 12, 2016, are first fitted with ARIMA-GARCH models. The analysis then applies a copula to the two stocks with the highest ARIMA-GARCH residual correlation, BCA and INTP. The Gumbel copula is selected as the best copula, with a parameter of 1.337. The risk derived from the Value at Risk (VaR) calculation is 3.922% at the 99% confidence level, 2.397% at the 95% confidence level, and 1.745% at the 90% confidence level. Keywords: Value at Risk, Copula, GARCH
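
A partial sketch of two pieces of this procedure: estimating the Gumbel copula parameter from Kendall's tau and reading VaR off return quantiles. The returns are simulated stand-ins for the BCA and INTP series, and the ARIMA-GARCH filtering and copula simulation steps of the paper are omitted; the VaR here is simply an empirical portfolio-return quantile.

```python
import numpy as np
from scipy.stats import kendalltau

# Simulated daily return series standing in for the two correlated stocks;
# the paper first filters each series with an ARIMA-GARCH model (omitted here).
rng = np.random.default_rng(4)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1250)
r_bca, r_intp = 0.01 * z[:, 0], 0.012 * z[:, 1]

# Gumbel copula parameter via the Kendall's tau relation: theta = 1 / (1 - tau).
tau, _ = kendalltau(r_bca, r_intp)
theta = 1.0 / (1.0 - tau)
print(f"Kendall tau = {tau:.3f}, Gumbel theta = {theta:.3f}")

# VaR of an equally weighted two-stock portfolio as an empirical quantile of
# portfolio returns (the paper instead simulates returns from the fitted copula).
port = 0.5 * r_bca + 0.5 * r_intp
for cl in (0.99, 0.95, 0.90):
    var = -np.quantile(port, 1 - cl)
    print(f"VaR at {cl:.0%} confidence: {var:.4%}")
```
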
CLASSIFICATION OF PROSPECTIVE BLOOD DONORS USING THE NAÏVE BAYES CLASSIFIER METHOD (Case Study: Prospective Blood Donors in Semarang City) Dhimas Bayususetyo; Rukun Santoso; Tarno Tarno
Jurnal Gaussian Vol 6, No 2 (2017): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (602 KB) | DOI: 10.14710/j.gauss.v6i2.16948

Abstract

Classification is the process of finding a model or function that describes and distinguishes data classes or concepts, so that the model can be used to predict the class of objects whose class label is unknown. One of the classification methods is Naïve Bayes, a simple probabilistic prediction technique based on the application of Bayes' theorem with a strong independence assumption. This study modifies the Naïve Bayes method by calculating the conditional probability of each feature using two approaches: the normal density function and the cumulative distribution function. Both approaches are used to classify prospective blood donors in Semarang City. The predictor variables used are hemoglobin level, upper blood pressure, lower blood pressure, and weight. The results show that both approaches have the same Matthews Correlation Coefficient (MCC) value, 0.8985841, which is close to +1, meaning that both approaches classify equally well. Keywords: Classification, Naïve Bayes, Normal Density Function, Cumulative Distribution Function, Blood Donors, Matthews Correlation Coefficient (MCC).
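
The normal-density variant can be sketched with scikit-learn's GaussianNB and the MCC metric on fabricated donor data: the features follow the paper's predictor list, but the values, the eligibility rule, and the train/test split are invented for illustration, and the CDF-based variant is not shown.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# Fabricated donor data: hemoglobin, upper/lower blood pressure, weight,
# with label 1 = eligible to donate (a stand-in for the Semarang data set).
rng = np.random.default_rng(5)
n = 400
X = np.column_stack([
    rng.normal(14, 1.5, n),    # hemoglobin (g/dL)
    rng.normal(120, 10, n),    # upper (systolic) blood pressure (mmHg)
    rng.normal(80, 8, n),      # lower (diastolic) blood pressure (mmHg)
    rng.normal(65, 10, n),     # weight (kg)
])
y = ((X[:, 0] > 12.5) & (X[:, 3] > 50)).astype(int)   # invented eligibility rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Gaussian Naive Bayes: class-conditional probabilities from the normal density.
clf = GaussianNB().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("MCC:", matthews_corrcoef(y_te, pred))
```
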
FORECASTING THE CONSUMER PRICE INDEX OF 4 CITIES IN CENTRAL JAVA USING THE GENERALIZED SPACE TIME AUTOREGRESSIVE (GSTAR) MODEL Lina Irawati; Tarno Tarno; Hasbi Yasin
Jurnal Gaussian Vol 4, No 3 (2015): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (666.462 KB) | DOI: 10.14710/j.gauss.v4i3.9479

Abstract

Generalized Space Time Autoregressive (GSTAR) models are a generalization of Space Time Autoregressive (STAR) models for data with both time series characteristics and location linkages (space-time). GSTAR is more flexible when the locations have heterogeneous characteristics. The purposes of this research are to obtain the best GSTAR model and to forecast the Consumer Price Index (CPI) of Purwokerto, Solo, Semarang, and Tegal. The best model obtained is GSTAR(11)-I(1) with normalized cross-correlation weights, because its residuals are white noise and multivariate normal, with an average MAPE of 3.93% and an RMSE of 10.02. The best GSTAR model shows that the CPI of Purwokerto is affected only by its own past values; it is not affected by the other cities but can affect them, while the CPIs of Surakarta, Semarang, and Tegal affect one another. Keywords: GSTAR, Space Time, Consumer Price Index, MAPE, RMSE
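
As a complement to the GSTAR fitting sketch shown after the rice-production abstract above, the snippet below illustrates two ingredients specific to this study: a normalized cross-correlation weight matrix and the MAPE and RMSE accuracy measures. The CPI series are simulated placeholders, and using lag-1 cross correlations of the differenced series is my assumption about how the weights are formed.

```python
import numpy as np

# Simulated monthly CPI stand-ins for four cities (Purwokerto, Solo, Semarang, Tegal).
rng = np.random.default_rng(6)
cpi = 100 + np.cumsum(rng.normal(0.3, 0.5, (120, 4)), axis=0)

# Normalized cross-correlation weights: absolute lag-1 cross correlations of the
# differenced series, each row rescaled so the weights for a city sum to one.
d = np.diff(cpi, axis=0)
N = d.shape[1]
W = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            W[i, j] = abs(np.corrcoef(d[1:, i], d[:-1, j])[0, 1])
W = W / W.sum(axis=1, keepdims=True)
print("normalized cross-correlation weight matrix:\n", np.round(W, 3))

# Forecast accuracy measures used in the paper.
def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Example: accuracy of a naive "last value" forecast for the first city.
print("MAPE:", round(mape(cpi[1:, 0], cpi[:-1, 0]), 3),
      "% | RMSE:", round(rmse(cpi[1:, 0], cpi[:-1, 0]), 3))
```
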
VALUE AT RISK CALCULATION USING THE INTEGRATED GENERALIZED AUTOREGRESSIVE CONDITIONAL HETEROSCEDASTICITY (IGARCH) MODEL (Case Study on the Return of the Rupiah Exchange Rate against the Australian Dollar) Dian Febriana; Tarno Tarno; Sugito Sugito
Jurnal Gaussian Vol 3, No 4 (2014): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (416.867 KB) | DOI: 10.14710/j.gauss.v3i4.8074

Abstract

Foreign exchange trading can be an alternative investment because of the rapid movement of exchange rates and its liquidity. Measuring risk is important because investment involves substantial funds. One popular risk measurement method is Value at Risk (VaR). Financial time series data usually have a non-constant variance (heteroscedasticity). To handle this, ARCH and GARCH models are used; one type of ARCH/GARCH model is the Integrated Generalized Autoregressive Conditional Heteroscedasticity (IGARCH) model. The purpose of this study is to model volatility with IGARCH and to calculate VaR based on the estimated volatility of the return of the rupiah exchange rate against the Australian dollar. This study uses the daily selling rate of the rupiah against the Australian dollar from June 1, 2012 until February 28, 2014. The best model for forecasting the volatility of the return data is ARIMA([10],0,[19])-IGARCH(1,1), because it has the smallest AIC value. The volatility forecasts obtained from the IGARCH(1,1) model are used to calculate Value at Risk for 5 periods ahead with a one-day holding period and a 95% confidence level. The VaR is around 0.95% to 1.07%, with the highest VaR on 3 March 2014 and the lowest VaR on 7 March 2014. Keywords: Exchange rate, Volatility, Integrated Generalized Autoregressive Conditional Heteroscedasticity (IGARCH), Value at Risk (VaR)
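
A bare-bones numerical sketch of the IGARCH(1,1) variance recursion and the resulting one-day VaR. The return series and the fixed omega and alpha values are illustrative (the paper estimates the full ARIMA([10],0,[19])-IGARCH(1,1) model by maximum likelihood), and only the 95% level with a one-day holding period is shown.

```python
import numpy as np
from scipy.stats import norm

# Simulated daily returns standing in for the IDR/AUD selling-rate returns.
rng = np.random.default_rng(7)
r = rng.normal(0, 0.006, 450)

# IGARCH(1,1) variance recursion with the integrated constraint alpha + beta = 1:
#   sigma2_t = omega + alpha * r_{t-1}^2 + (1 - alpha) * sigma2_{t-1}
# omega and alpha would normally be estimated by maximum likelihood;
# here they are fixed illustrative values.
omega, alpha = 1e-7, 0.06
sigma2 = np.empty(r.size + 1)
sigma2[0] = r.var()
for t in range(r.size):
    sigma2[t + 1] = omega + alpha * r[t] ** 2 + (1 - alpha) * sigma2[t]

# One-day-ahead VaR at 95% confidence from the forecast volatility,
# for a one-unit position and a one-day holding period.
var_95 = norm.ppf(0.95) * np.sqrt(sigma2[-1])
print(f"one-day 95% VaR: {var_95:.4%}")
```
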
INTERVENTION ANALYSIS AND OUTLIER DETECTION ON DOMESTIC TOURIST DATA (Case Study in the Special Region of Yogyakarta) Lenny Budiarti; Tarno Tarno; Budi Warsito
Jurnal Gaussian Vol 2, No 1 (2013): Jurnal Gaussian
Publisher : Department of Statistics, Faculty of Science and Mathematics, Universitas Diponegoro

Full PDF (620.607 KB) | DOI: 10.14710/j.gauss.v2i1.2742

Abstract

Tourist data are very interesting to study because the Indonesian tourism sector drives the national economy and has the potential to push economic growth higher in the future, so forecasts of tourist numbers are much needed by the tourism business. Tourist data tend to fluctuate because of many factors that can affect the number of tourists in an area to an extreme degree, such as disasters, government regulation, social stability, violence, and terrorism. Such extreme data can be assessed using intervention analysis and outlier detection. An intervention model is a time series model that can be used to forecast data affected by interventions from internal and external factors. In the intervention model there are two kinds of intervention function, the step function and the pulse function: a step function represents an intervention whose effect persists over a period of time, while a pulse function represents an intervention occurring only at a certain time. For outlier detection there are four types: additive outlier (AO), innovational outlier (IO), level shift (LS), and temporary change (TC). The empirical study uses data on domestic tourists staying in five-star hotels and motels throughout Yogyakarta from January 2006 until December 2010. Based on the results, an intervention occurred in January 2010 and is modeled with a pulse function, giving an MSE of 1172. Outlier detection found five outliers, but only four significant outliers were included in the intervention model, giving an MSE of 523.7167. Thus, the intervention model with outlier detection is chosen as the best model based on the smallest-MSE criterion. Keywords: Domestic tourists, intervention model, pulse function, outlier detection
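
To show how a pulse intervention can enter a time series model, here is a small statsmodels sketch in which the intervention dummy is an exogenous regressor in an ARIMA model. The tourist counts, the ARIMA order, and the size of the January 2010 drop are invented for illustration, and the outlier-detection step of the paper is not shown.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Fabricated monthly domestic-tourist counts, Jan 2006 - Dec 2010, with an
# artificial one-month drop in January 2010 standing in for the intervention.
rng = np.random.default_rng(8)
idx = pd.date_range("2006-01", periods=60, freq="MS")
y = pd.Series(50 + np.cumsum(rng.normal(0.2, 2, 60)), index=idx)
y.iloc[48] -= 25                      # January 2010 is the 49th observation

# Pulse function: 1 only at the intervention month, 0 elsewhere
# (a step function would instead stay at 1 from the intervention onward).
pulse = np.zeros(60)
pulse[48] = 1.0

# ARIMA model with the pulse dummy as an exogenous regressor; its estimated
# coefficient measures the size of the intervention effect.
fit = SARIMAX(y, exog=pulse, order=(1, 0, 0)).fit(disp=False)
print(fit.params)
```
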