Efficient implicit Lagrangian twin parametric insensitive support vector regression via unconstrained minimization problems



Deepak Gupta 1 & Bharat Richhariya 1

© Springer Nature Switzerland AG 2020

Abstract

In this paper, an efficient implicit Lagrangian twin parametric insensitive support vector regression is proposed which leads to a pair of unconstrained minimization problems, motivated by the works on twin parametric insensitive support vector regression (Peng: Neurocomputing 79, 26–38, 2012) and Lagrangian twin support vector regression (Balasundaram and Tanveer: Neural Comput. Applic. 22(1), 257–267, 2013). Since the objective function is strongly convex, piecewise quadratic, and differentiable, it can be solved by gradient-based iterative methods. Because the objective function contains the nonsmooth 'plus' function, one can replace the 'plus' function by either the generalized Hessian approach or a smooth approximation function, and then apply a simple Newton-Armijo step-size algorithm. These algorithms can be easily implemented in MATLAB and do not require any optimization toolbox. The advantage of this method is that the proposed algorithms take less training time and can deal with data having a heteroscedastic noise structure. To demonstrate the effectiveness of the proposed method, computational results are obtained on synthetic and real-world datasets, which clearly show comparable generalization performance and improved learning speed in comparison with support vector regression, twin support vector regression, and twin parametric insensitive support vector regression.

Keywords Support vector regression · Twin support vector regression · Regression estimation · Parametric insensitive model · Unconstrained convex minimization
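The abstract names the two ingredients used to solve the unconstrained problems: a surrogate for the nonsmooth 'plus' function (x)_+ = max(x, 0) and a Newton-Armijo iteration with a generalized Hessian. The following is a minimal sketch, not the authors' exact formulation: it applies Newton-Armijo to a generic strongly convex, piecewise-quadratic objective of the same family, f(u) = 0.5‖u‖² + (C/2)‖(Au − b)_+‖². The names A, b, C, alpha, smooth_plus, and newton_armijo are illustrative placeholders introduced here, not symbols from the paper.

```python
import numpy as np

def plus(x):
    """Nonsmooth 'plus' function (x)_+ = max(x, 0)."""
    return np.maximum(x, 0.0)

def smooth_plus(x, alpha=5.0):
    """Smooth approximation of (x)_+ in the style of Lee-Mangasarian
    smoothing: p(x, a) = x + (1/a) * log(1 + exp(-a*x)). An alternative
    to the generalized-Hessian route; not used in the iteration below."""
    return x + np.logaddexp(0.0, -alpha * x) / alpha

def newton_armijo(A, b, C=1.0, tol=1e-6, max_iter=100):
    """Newton-Armijo on f(u) = 0.5*||u||^2 + (C/2)*||(A u - b)_+||^2.
    f is strongly convex and piecewise quadratic; its gradient is
    continuous, but a generalized Hessian is needed at kinks."""
    n = A.shape[1]
    u = np.zeros(n)
    for _ in range(max_iter):
        r = A @ u - b
        grad = u + C * (A.T @ plus(r))                # gradient of f
        if np.linalg.norm(grad) < tol:
            break
        # Generalized Hessian: the step function of r serves as a
        # subderivative of the plus function; H is positive definite,
        # so the Newton system always has a unique solution.
        D = (r > 0).astype(float)
        H = np.eye(n) + C * (A.T * D) @ A             # I + C * A' diag(D) A
        d = np.linalg.solve(H, -grad)                 # Newton direction
        # Armijo backtracking line search on the exact objective.
        f0 = 0.5 * (u @ u) + 0.5 * C * np.sum(plus(r) ** 2)
        t = 1.0
        while True:
            u_new = u + t * d
            r_new = A @ u_new - b
            f_new = 0.5 * (u_new @ u_new) + 0.5 * C * np.sum(plus(r_new) ** 2)
            if f_new <= f0 + 1e-4 * t * (grad @ d) or t < 1e-8:
                break
            t *= 0.5
        u = u_new
    return u
```

Under these assumptions the same template would be run once for each problem of the pair; with the smooth_plus variant, the ordinary Hessian of the smoothed objective replaces the generalized one.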

* Deepak Gupta
  [email protected]; [email protected]

  Bharat Richhariya
  [email protected]

1 Department of Computer Science & Engineering, National Institute of Technology, Arunachal Pradesh 791112, India


1 Introduction

Support Vector Machine (SVM), proposed by Cortes and Vapnik [6], has become a prominent technique for both classification and regression problems over the last two decades. Its superiority over other techniques is due to its convex optimization formulation, which yields a unique, globally optimal solution. Owing to its low VC dimension, the generalization performance of SVM is also very high compared to other techniques such as the artificial neural network (ANN). One of the most important aspects of SVM is that its VC dimension, i.e., the number of free parameters, remains small even for data of large size. SVM is used in a variety of applications in many fields, ranging from pattern recognition [28], drug discovery [8], brain-computer interface [13], sediment load prediction in rivers [1, 5], financial time series forecasting [25, 26], and bankruptcy prediction [39] to facial expression recognition [35]. Although it offers better generalization ability for classification problems in comparison to other well-known machine learning methods, one of