NBWELM: naive Bayesian based weighted extreme learning machine
ORIGINAL ARTICLE
NBWELM: naive Bayesian based weighted extreme learning machine

Jing Wang · Lin Zhang · Di Han · Juan-juan Cao
Received: 15 May 2014 / Accepted: 24 November 2014
© Springer-Verlag Berlin Heidelberg 2014
Abstract Weighted extreme learning machines (WELMs) aim to find a better tradeoff between the empirical and structural risks, so they achieve good generalization performance, especially when used to deal with imbalanced classification problems. Existing weighting strategies assign distribution-independent weight matrices to WELMs, i.e., the weights do not take the probabilistic information of the samples into account. As a result, WELM strengthens the effect of outliers to some extent. In this paper, a naive Bayesian based WELM (NBWELM) is proposed, in which the weights are determined with a flexible naive Bayesian (FNB) classifier. By calculating the posterior probability of each sample, NBWELM not only handles outliers effectively but also simultaneously incorporates two different kinds of weighting information: the training error used in weighted regularized ELM (WRELM) and the class distribution used in Zong et al.'s WELM (ZWELM). Experimental results on 45 KEEL and UCI datasets show that the proposed NBWELM further improves the generalization capability of WELM and thus obtains higher classification accuracy than WRELM and ZWELM. Meanwhile, NBWELM does not remarkably increase the computational complexity of WELM, owing to the simplicity of FNB.

Keywords Extreme learning machine · Naive Bayesian · Imbalance classification

J. Wang (&)
Modern Education Technology Center, Hebei Institute of Physical Education, Shijiazhuang 050041, China
e-mail: [email protected]

L. Zhang
Department of Sports, Xingtai University, Xingtai 054001, China

J. Cao
College of Mathematics and Information Technology, Xingtai University, Xingtai 054001, China

D. Han
Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau, China
e-mail: [email protected]
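The abstract describes deriving per-sample weights from naive Bayesian posterior probabilities, so that outliers (samples with low posterior under their own class) receive small weights. A minimal sketch of this idea using a Gaussian naive Bayes model is shown below; the function name and the exact posterior-to-weight mapping are assumptions for illustration, not the paper's definition of FNB.

```python
import numpy as np

def gaussian_nb_posteriors(X, y):
    """Per-sample posterior P(y_i | x_i) under a Gaussian naive Bayes model.

    Each sample is weighted by the posterior probability of its own
    class, so outliers (low posterior) are down-weighted. This is an
    illustrative stand-in for the paper's FNB-based weighting.
    """
    classes = np.unique(y)
    log_joint = np.zeros((X.shape[0], classes.size))
    for k, c in enumerate(classes):
        Xc = X[y == c]
        # Per-feature Gaussian likelihoods (naive independence assumption)
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
        prior = Xc.shape[0] / X.shape[0]
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        log_joint[:, k] = np.log(prior) + log_lik
    # Normalize in log space to get posteriors
    log_post = log_joint - np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
    post = np.exp(log_post)
    idx = np.searchsorted(classes, y)
    return post[np.arange(X.shape[0]), idx]
```

These posteriors could then populate the diagonal weight matrix of a weighted ELM solve, which is how the method described in the abstract would combine probabilistic information with the WELM framework.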
1 Introduction

Extreme learning machine (ELM) [10, 11] is a special single-hidden-layer feedforward neural network (SLFN) in which the input weights and hidden-layer biases are randomly chosen and the output weights are analytically determined. Because it avoids iteratively tuning the weights, ELM has an extremely fast training speed [5, 22]. Meanwhile, the universal approximation capability of ELM has also been theoretically proved [8]. Although existing studies demonstrate good performance in different applications, e.g., security assessment [24], data privacy [20], image quality assessment [1], face recognition [3] and online sequential learning [18], several issues can still be addressed to further improve the generalization capability of ELMs, e.g., the tendency to obtain an over-fitting model because only the empirical risk is considered [4], and the weakened impact of minority-class samples when dealing with imbalanced classification [28]. Therefore, a new kind of improvement to ELMs named weighted ELMs
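The mechanics described above, random hidden layer, analytic output weights, and an optional per-sample weight matrix as in WELM, can be sketched as follows. This is a minimal illustration under standard ELM formulations (sigmoid activation, ridge-regularized weighted least squares); the function names are our own, not from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden, C=1.0, sample_weights=None, seed=None):
    """Train a (weighted) ELM analytically.

    X: (n_samples, n_features) inputs
    T: (n_samples, n_classes) one-hot targets
    C: regularization constant balancing empirical and structural risk
    sample_weights: optional per-sample weights; a WELM derives these
        from the class distribution (or, per this paper, from naive
        Bayesian posteriors)
    """
    rng = np.random.default_rng(seed)
    # Input weights and hidden biases are randomly chosen and never tuned
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))  # hidden-layer output matrix

    if sample_weights is None:
        sample_weights = np.ones(X.shape[0])
    Wd = np.diag(sample_weights)
    # Output weights via regularized weighted least squares:
    #   beta = (H^T W H + I / C)^{-1} H^T W T
    A = H.T @ Wd @ H + np.eye(n_hidden) / C
    beta = np.linalg.solve(A, H.T @ Wd @ T)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return np.argmax(H @ beta, axis=1)
```

With uniform weights this reduces to the regularized ELM; plugging a non-uniform diagonal matrix into `Wd` is exactly where the weighting strategies compared in this paper (WRELM, ZWELM, NBWELM) differ.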