When do we employ the k-nearest neighbor algorithm, and how is it performed in the SPSS statistical package?

The k-nearest neighbor algorithm (kNN) is a method for classifying objects based on the closest training examples in the feature space. kNN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-nearest neighbor algorithm is among the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor.
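The majority-vote classification described above can be sketched in a few lines of Python. This is a minimal illustration, not SPSS's implementation; the function name `knn_classify` and the toy data are made up for the example, and Euclidean distance is assumed as the distance measure.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    # Euclidean distance in the feature space
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Take the k training examples closest to the query point
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    # Majority vote among their labels
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy training set: two clusters labeled "A" and "B"
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B")]
print(knn_classify(train, (1.1, 1.0), k=3))  # "A" — two of the three nearest are "A"
```

With k = 1 the same function reduces to assigning the class of the single nearest neighbor.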
The same method can be used for regression by assigning the object a property value equal to the average of the values of its k nearest neighbors. It can be useful to weight the neighbors' contributions so that nearer neighbors contribute more to the average than more distant ones. (A common weighting scheme gives each neighbor a weight of 1/d, where d is its distance from the object. This scheme is a generalization of linear interpolation.)
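The distance-weighted regression variant, using the 1/d weighting mentioned above, can be sketched as follows. Again this is an illustrative sketch with hypothetical names and toy data, assuming Euclidean distance.

```python
import math

def knn_regress(train, query, k=3):
    """Predict a value for `query` as the 1/d-weighted average of the values
    of its k nearest neighbors. `train` is a list of (x, value) pairs."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    weighted_sum, weight_total = 0.0, 0.0
    for x, value in neighbors:
        d = dist(x, query)
        if d == 0:                 # exact match: return its value directly
            return value
        w = 1.0 / d                # nearer neighbors get larger weight
        weighted_sum += w * value
        weight_total += w
    return weighted_sum / weight_total

# Toy 1-D training set where value = x
train = [((0.0,), 0.0), ((1.0,), 1.0), ((2.0,), 2.0), ((4.0,), 4.0)]
print(knn_regress(train, (1.5,), k=2))  # 1.5 — equidistant neighbors average out
```

The query point 1.5 is equidistant from the neighbors at 1.0 and 2.0, so the weighted average reproduces the linearly interpolated value, illustrating why 1/d weighting generalizes linear interpolation.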
The neighbors are taken from a set of objects for which the correct classification (or, in the case of regression, the value of the property) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. The k-nearest neighbor algorithm is sensitive to the local structure of the data.
The link below gives a video demonstration of how to perform the k-nearest neighbor algorithm in SPSS.
spss.com/media/demos/statistics/demostatsnearestneigh/index.htm