Retracted: A Novel Voting Mathematical Rule Classification for Image Recognition
Abstract. In machine learning, system accuracy depends on the classification result, and classification accuracy plays an imperative role in many domains. The non-parametric K-Nearest Neighbor (KNN) classifier is the most widely used classifier for pattern analysis. Despite its simplicity, ease of use, and effectiveness, the main problem associated with KNN is the selection of the number of nearest neighbors, i.e. 'k'. At present no statistical algorithm reliably finds the optimal value of 'k' that yields the lowest misclassification error rate. Motivated by this problem, a new sample-space-reduction weighted voting mathematical rule (AVNM) is proposed for classification. Like KNN, the proposed AVNM rule is non-parametric: it uses a weighted voting mechanism with sample space reduction to learn and predict the class label of an unidentified sample. Unlike KNN, AVNM requires no initial selection of a predefined variable or neighbor count, and it also reduces the effect of outliers. To verify the performance of the proposed classifier, experiments were conducted on 10 standard datasets from the UCI repository and one manually created dataset. The results, measured by confusion-matrix accuracy, show that the AVNM rule outperforms the state-of-the-art KNN classifier and its variants, achieving higher classification accuracy and a lower error rate. The proposed rule automates nearest-neighbor selection through its sample-space-reduction mechanism and improves the classification rate on both the UCI and manually created datasets.

Keywords: KNN · Classification · Lazy learning
© Springer International Publishing Switzerland 2016 O. Gervasi et al. (Eds.): ICCSA 2016, Part V, LNCS 9790, pp. 257–270, 2016. DOI: 10.1007/978-3-319-42092-9_20
S. Abbasi et al.
1 Introduction

Machine learning is the branch of the engineering discipline that explores how systems can learn descriptions from data through algorithmic means. Machine learning algorithms use the patterns in sample data to learn and build a knowledge base; using this pre-stored knowledge base and algorithmic computation, machine-learning-based systems help assign an undiscovered sample to its corresponding class. One of the most widely used non-parametric classifiers is the Nearest Neighbor (NN) rule, proposed by Fix and Hodges [1] in 1951 for pattern analysis. The NN rule is simple and among the most effective of all non-parametric classifiers. Despite its simplicity, it is also considered a lazy classifier, because it performs no generalization from the training set. In 1967, the K-Nearest Neighbor rule was proposed by Cover and Hart [2], which uses the 'k' nearest samples for decision making.
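The KNN decision rule described above can be sketched as follows; the toy dataset and the choice k = 3 are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the K-Nearest Neighbor (KNN) rule: classify a query
# by majority vote among its k nearest training samples.
# The toy dataset and k = 3 below are illustrative assumptions only.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Return the majority class among the k training samples
    closest to `query`. `train` is a list of
    (feature_vector, class_label) pairs."""
    # Euclidean distance from the query to every training sample.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy two-class example.
train = [((0.0, 0.0), 'A'), ((0.1, 0.2), 'A'),
         ((1.0, 1.0), 'B'), ((0.9, 1.1), 'B')]
print(knn_predict(train, (0.2, 0.1)))  # prints 'A'
```

With k = 1 this reduces to the original NN rule of Fix and Hodges; the sensitivity of the vote to the chosen k is exactly the selection problem the paper sets out to address.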