Fast gradient descent algorithm for image classification with neural networks
ORIGINAL PAPER

Abdelkrim El Mouatasim¹

Received: 11 November 2019 / Revised: 6 March 2020 / Accepted: 18 April 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract Any gradient descent optimization method involves selecting a learning rate. Tuning the learning rate quickly becomes repetitive for deeper image classification models and does not necessarily lead to optimal convergence. In this paper we propose a modification of the gradient descent algorithm in which a Nesterov step is added and the learning rate is updated at each epoch: rather than being fixed, the learning rate is itself adapted, either by the Armijo rule or by step-size control. We call the resulting method the fast gradient descent (FGD) algorithm, prove its O(1/k²) convergence rate, and apply it to image classification with neural networks on the MNIST dataset. Numerical experiments show that the FGD algorithm is faster than standard gradient descent algorithms. Keywords Gradient algorithm · Nesterov algorithm · Learning rate control · Image classification · Neural networks
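The Armijo rule mentioned in the abstract is a backtracking line search that adapts the learning rate at each step. The following is a minimal illustrative sketch, not the paper's implementation; the function names, test problem, and constants (`beta`, `sigma`) are assumptions chosen for demonstration:

```python
import numpy as np

def armijo_step(f, grad_f, x, alpha0=1.0, beta=0.5, sigma=1e-4):
    """One gradient step with a backtracking (Armijo) line search:
    shrink the learning rate alpha until the sufficient-decrease
    condition f(x - alpha*g) <= f(x) - sigma*alpha*||g||^2 holds."""
    g = grad_f(x)
    alpha = alpha0
    while f(x - alpha * g) > f(x) - sigma * alpha * np.dot(g, g):
        alpha *= beta
    return x - alpha * g, alpha

# Demonstration on a simple quadratic f(x) = ||x||^2 / 2, grad f = x
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x
x = np.array([3.0, -4.0])
for _ in range(50):
    x, alpha = armijo_step(f, grad_f, x)
```

Because the accepted step size depends on local curvature, this rule removes the need to hand-tune a single global learning rate, which is the motivation the abstract gives for adapting the rate per epoch.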
1 Introduction Computer vision helps machines view and comprehend digital images, something humans do effortlessly with high accuracy. Image processing is an important field of computer vision, with major real-world applications such as autonomous vehicles [2], industry [15], medical diagnosis [19], and face recognition [21]. Image processing comprises many tasks, such as regularization [7,8], clustering [13], localization [6], and classification [9]. In this paper we are concerned with image classification, the process of assigning a class label to an image. Current state-of-the-art solutions to image classification use artificial neural networks [19], convolutional neural networks [17], and deep neural networks [11,14], but these approaches have limitations, such as the need for carefully designed structures and poor interpretability. The training algorithms used for image classification include gradient descent algorithms [4,15,19] and genetic algorithms [9]. However, gradient descent and stochastic algorithms tend to converge slowly and can become trapped in local minima. In fact, it is possible to divide the image classification
Corresponding author: Abdelkrim El Mouatasim, [email protected]
task into two phases: feature extraction and classification. These two stages can be performed in sequence, which avoids simultaneously adjusting the parameters of the entire network and reduces the difficulty of parameter tuning. Based on these considerations, an improved gradient descent method has been proposed, the stochastic gradient descent (SGD) algorithm. SGD combines the advantages of gradient descent, back propagation [4], and a stochastic strategy, and it is used as a training algorithm for classifiers, as is the Nesterov accelerated gradient (NAG) method [3]. In this paper, we propose a fast iterative algorithm for image
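The Nesterov accelerated gradient (NAG) update cited in the introduction evaluates the gradient at a look-ahead point rather than at the current iterate. A minimal sketch follows; the hyperparameters (`lr`, `momentum`, `iters`) and the quadratic test problem are my own illustrative assumptions, not values from the paper:

```python
import numpy as np

def nag(grad_f, x0, lr=0.1, momentum=0.9, iters=150):
    """Nesterov accelerated gradient: compute the gradient at the
    look-ahead point x + momentum*v, then update velocity and iterate."""
    x = x0.copy()
    v = np.zeros_like(x)
    for _ in range(iters):
        lookahead = x + momentum * v
        v = momentum * v - lr * grad_f(lookahead)
        x = x + v
    return x

# Minimize f(x) = ||x||^2 / 2, whose gradient is x
x = nag(lambda x: x, np.array([5.0, -2.0]))
```

The look-ahead evaluation is what distinguishes NAG from plain momentum and is the ingredient behind the O(1/k²) rate the abstract attributes to the proposed FGD method.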