Kernel Matching Reduction Algorithms for Classification

Abstract. Inspired by kernel matching pursuit (KMP) and support vector machines (SVMs), we propose a novel classification algorithm: the kernel matching reduction algorithm (KMRA). This method selects all training examples to construct a kernel-based function dictionary. Redundant functions are then removed iteratively from the dictionary according to their weight magnitudes, which are determined by linear SVMs. During the reduction process, the parameters of the functions in the dictionary can be adjusted dynamically. Similarities and differences between KMRA and several other machine learning algorithms are also addressed. Experimental results show that KMRA produces sparser solutions than SVMs while still obtaining comparable classification accuracies.

Keywords: Kernel matching reduction algorithms, Kernel matching pursuit, Support vector machines, Radial basis function neural networks.

1 Introduction

Kernel-based pattern classification techniques have been widely used, especially during the past decade [1]. The SVM approach, proposed by Vapnik [2], is the representative one: it achieves state-of-the-art performance on many classification problems and uses only a fraction of the training examples to form its solutions. Although SVMs already yield sparse expressions for classification, several techniques have been proposed to make the solutions even sparser. In [3], kernel matching pursuit (KMP) is introduced to learn a weighted sum of basis functions drawn from a kernel-based dictionary, and it can produce much sparser models than SVMs. KMP appends functions sequentially to an initially empty basis, choosing from a redundant dictionary of functions, so as to approximate a classification function under a certain loss criterion. The basic matching pursuit algorithm, as well as its two refinements, back-fitting and pre-fitting, is described in [3]. To make KMP practical for large datasets, a stochastic version has been proposed as an approximation of the original KMP [4]. Different from KMP and its variants mentioned above, kernel matching reduction algorithms (KMRAs) are proposed in this paper to perform the reverse procedure.
Firstly, all training examples are selected to construct a function dictionary. Then the dictionary is reduced iteratively by means of linear SVMs. During the reduction process, the parameters of the functions in the dictionary can be adjusted dynamically. The rest of this paper is organized as follows. KMRAs are described in Section 2. Section 3 compares the similarities and differences between KMRAs and other machine learning algorithms, such as KMP, SVMs, hidden space SVMs (HSSVMs), and radial basis function neural networks (RBFNNs). Experimental results are presented in Section 4, and conclusions and further thoughts are given in the last section.
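To make the build-then-reduce procedure concrete, the following Python sketch illustrates the idea; it is not the authors' implementation. It assumes an RBF function dictionary with one function per training example, scikit-learn's LinearSVC as the linear SVM, a fixed pruning fraction per iteration, and a simple dictionary-size stopping rule; the dynamic adjustment of the function parameters described above is omitted.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics.pairwise import rbf_kernel


def kmra_reduce(X, y, gamma=1.0, prune_frac=0.1, min_size=20, C=1.0):
    """Sketch of a KMRA-style reduction loop (hypothetical parameters).

    Starts with one RBF function per training example and repeatedly
    drops the functions whose linear-SVM weights have the smallest
    magnitude, until only `min_size` functions remain.
    """
    centers = np.arange(len(X))  # dictionary: one RBF function per training example
    while len(centers) > min_size:
        Phi = rbf_kernel(X, X[centers], gamma=gamma)  # data mapped through the dictionary
        svm = LinearSVC(C=C).fit(Phi, y)              # linear SVM over dictionary outputs
        w = np.abs(svm.coef_).sum(axis=0)             # weight magnitude per function
        n_drop = max(1, int(prune_frac * len(centers)))
        keep = np.sort(np.argsort(w)[n_drop:])        # discard the smallest-weight functions
        centers = centers[keep]
    # refit on the final, reduced dictionary so the returned model matches it
    svm = LinearSVC(C=C).fit(rbf_kernel(X, X[centers], gamma=gamma), y)
    return centers, svm
```

A new example would then be classified by mapping it through the retained functions, e.g. svm.decision_function(rbf_kernel(X_new, X[centers], gamma=gamma)); the pruning schedule and stopping rule above are placeholders, whereas the procedure described in this paper also adjusts the kernel parameters during the reduction.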

2 Kernel Matching Reduction Algorithms (KMRAs)

Inspired