Classification studies and maps each attribute into one of a set of predetermined classes. K-NN has several drawbacks, including a high computational load when processing the training data and large memory requirements when implemented. Although the selection of a distance metric and data pre-processing does not guarantee an increase in accuracy, in this study the Euclidean distance metric increased accuracy more than the Manhattan metric. The optimal number of neighbors varies between distance metrics, and finding it is computationally time-consuming. Smaller values of k are more sensitive to noise, while higher values of k yielded lower accuracy and shorter computation times. Whether k is odd or even does not affect accuracy, but it does affect computation time.
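The comparison described above can be sketched with a minimal K-NN classifier that supports both distance metrics. This is an illustrative implementation, not the study's actual code; the toy 2-D dataset, class labels, and function names are assumptions made for the example.

```python
import math
from collections import Counter

def euclidean(a, b):
    # Straight-line (L2) distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # City-block (L1) distance between two feature vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_predict(train, labels, query, k=3, dist=euclidean):
    # Rank every training point by distance to the query (this full scan is
    # the computational load noted above), then vote among the k nearest.
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], query))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical 2-D dataset with two well-separated classes (not from the study).
train = [(1.0, 1.0), (1.5, 2.0), (2.0, 1.5), (8.0, 8.0), (8.5, 9.0), (9.0, 8.5)]
labels = ["A", "A", "A", "B", "B", "B"]

print(knn_predict(train, labels, (1.2, 1.4), k=3, dist=euclidean))  # A
print(knn_predict(train, labels, (8.7, 8.6), k=3, dist=manhattan))  # B
```

Swapping `dist=euclidean` for `dist=manhattan` and varying `k` is enough to reproduce the kind of metric-versus-k comparison the study reports, since each distance metric can rank neighbors differently and thus favor a different optimal k.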