Articles
Journal: Jurnal Ipteks Terapan (Research of Applied Science and Education)

ANALYSIS OF K-FOLD CROSS VALIDATION VARIATION IN DATA CLASSIFICATION WITH THE K-NEAREST NEIGHBOR METHOD
Ridha Maya Faza Lubis; Zakarias Situmorang; Rika Rosnelly
Jurnal Ipteks Terapan (Research of Applied Science and Education) Vol. 14 No. 3 (2020): Re Publish Issue
Publisher : Lembaga Layanan Pendidikan Tinggi Wilayah X

Full PDF (375.392 KB) | DOI: 10.22216/jit.v14i3.98

Abstract

To produce a data classification with good accuracy, that is, closeness of a measured result to the actual numbers or data, testing can be carried out on the basis of accuracy, with the test-data and training-data parameters determined by cross validation. Data accuracy strongly influences the final classification result, because inaccurate data affects the percentage split between test data and training data. The K-Nearest Neighbor method itself provides no division into training data and test data. For this reason, the researchers analyzed the determination of training and test data using the cross validation algorithm together with K-Nearest Neighbor in data classification. The results of the study are based on evaluating the cross validation algorithm and the effect of the number of neighbors K on K-Nearest Neighbor data classification. The authors test K-Nearest Neighbor with K values of 3, 4, 5, 6, 7, 8, and 9, while the training/test split via cross validation uses K-fold values of 1 through 10.
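The procedure the abstract describes, sweeping the KNN neighbor count K against the number of cross-validation folds, can be sketched as follows. This is not the paper's code: the toy two-class dataset, the `knn_predict` and `kfold_accuracy` helpers, and the fold/K ranges shown are all illustrative assumptions (note that a single fold leaves no held-out test set, so the sketch starts at 2 folds).

```python
# Minimal sketch (not the paper's code): k-fold cross validation wrapped
# around a tiny K-Nearest Neighbor classifier, in pure Python.
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    ranked = sorted(train,
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

def kfold_accuracy(data, n_folds, k):
    """Average KNN accuracy over an n-fold split of `data`."""
    folds = [data[i::n_folds] for i in range(n_folds)]  # stride-based folds
    accs = []
    for i, test in enumerate(folds):
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        correct = sum(knn_predict(train, x, k) == y for x, y in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Hypothetical, well-separated two-class dataset for illustration only
data = [((x, y), 0) for x in range(3) for y in range(3)] + \
       [((x + 5, y + 5), 1) for x in range(3) for y in range(3)]

for n_folds in (2, 5):    # the study sweeps fold counts up to 10
    for k in (3, 5, 7):   # the study sweeps K = 3..9
        acc = kfold_accuracy(data, n_folds, k)
        print(f"folds={n_folds} k={k} mean accuracy={acc:.3f}")
```

Each fold serves once as the test set while the remaining folds form the training set, which is the training/test determination the abstract attributes to cross validation.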
COMPARISON OF ID3 AND NAIVE BAYES IN PREDICTING INDICATORS OF HOUSE WORTHINESS
Ade Clinton Sitepu; Wanayumini -; Zakarias Situmorang
Jurnal Ipteks Terapan (Research of Applied Science and Education) Vol. 14 No. 3 (2020): Re Publish Issue
Publisher : Lembaga Layanan Pendidikan Tinggi Wilayah X

Full PDF (591.22 KB) | DOI: 10.22216/jit.v14i3.99

Abstract

Decision making is a method of solving problems using particular techniques so that the outcome can be accepted. After calculations and considerations through several stages, the decision maker arrives at a decision; alternatives are weighed at each stage until the best decision is selected. Decision making aims to solve problems so that decisions serving the final goals can be implemented properly and effectively. This study simulates decision making from seven attributes for the proportion of house worthiness, based on data from the Central Statistics Agency (BPS). Several techniques exist for decision making, including the ID3 (decision tree) algorithm and the Naïve Bayes algorithm. Both are supervised-learning classification methods: the ID3 algorithm depicts relationships in the form of a tree diagram, whereas Naïve Bayes makes use of probability calculations and statistics. As a result, on the training data, decision trees modeled the decision making more accurately: prediction accuracy with the decision tree model was 90.90%, versus 72.73% for Naïve Bayes. However, the Naïve Bayes algorithm was faster.
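The split criterion at the heart of ID3, choosing the attribute whose split most reduces label entropy, can be illustrated briefly. This is not the study's code or data: the two attributes (`roof`, `floor`), the `worthy` label, and the six toy rows are hypothetical stand-ins for the seven BPS house-worthiness attributes.

```python
# Minimal sketch (not the paper's code): ID3's information-gain criterion
# on a hypothetical house-worthiness toy table.
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, label="worthy"):
    """Entropy reduction from splitting `rows` on `attr` (ID3's split criterion)."""
    base = entropy([r[label] for r in rows])
    n = len(rows)
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

# Hypothetical toy rows, invented for illustration only
rows = [
    {"roof": "tile",   "floor": "ceramic", "worthy": "yes"},
    {"roof": "tile",   "floor": "earth",   "worthy": "yes"},
    {"roof": "thatch", "floor": "earth",   "worthy": "no"},
    {"roof": "thatch", "floor": "ceramic", "worthy": "no"},
    {"roof": "tile",   "floor": "ceramic", "worthy": "yes"},
    {"roof": "thatch", "floor": "earth",   "worthy": "no"},
]

# ID3 splits first on the attribute with the highest information gain
for attr in ("roof", "floor"):
    print(attr, round(info_gain(rows, attr), 3))
```

ID3 applies this criterion recursively to grow the tree, while Naïve Bayes would instead multiply per-attribute conditional probabilities, which is why the two methods trade accuracy against speed as the abstract reports.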