An integrated classification model for incremental learning
Ji Hu¹ · Chenggang Yan¹ · Xin Liu¹ · Zhiyuan Li¹ · Chengwei Ren¹ · Jiyong Zhang¹ · Dongliang Peng¹ · Yi Yang²
Received: 18 May 2020 / Revised: 25 August 2020 / Accepted: 7 October 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020
Abstract
Incremental learning is a particular form of machine learning that enables a model to be updated incrementally as new data becomes available. In this way, the model can adapt to new data without the lengthy and time-consuming process required for complete model re-training. However, existing incremental learning methods face two significant problems: 1) noise in the classification sample data, and 2) poor accuracy when existing classification algorithms are applied to modern classification problems. To deal with these issues, this paper proposes an integrated classification model, known as the Pre-trained Truncated Gradient Confidence-weighted (Pt-TGCW) model. Since the pre-trained model can extract image information and transform it into a feature vector, the integrated model also shows its advantages in the field of image classification. Experimental results on ten datasets demonstrate that the proposed method outperforms its original counterparts.

Keywords: Incremental learning · Transfer learning · Confidence weight · Image classification · Masked-face dataset
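The abstract describes a pipeline in which a pre-trained model turns each image into a feature vector that is then fed to an incremental classifier. The paper's exact components are not reproduced here; the following is a minimal illustrative sketch in which a fixed random projection stands in for the pre-trained extractor and a simple perceptron stands in for the incremental learner (all class names and parameters are assumptions for illustration only):

```python
import numpy as np

class FrozenExtractor:
    """Stand-in for a pre-trained network: a fixed (frozen) map that
    turns a raw input into a feature vector. In the paper this role is
    played by a pre-trained image model; the random projection here is
    only an illustration."""
    def __init__(self, in_dim, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(feat_dim, in_dim)) / np.sqrt(in_dim)

    def __call__(self, x):
        # Nonlinear feature map; weights are never updated.
        return np.tanh(self.W @ x)

class OnlinePerceptron:
    """Incremental classifier: processes one sample at a time and
    updates only on mistakes, so no re-training pass is needed."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

    def predict(self, f):
        return 1 if self.w @ f >= 0 else -1

    def partial_fit(self, f, y):  # y in {-1, +1}
        if self.predict(f) != y:
            self.w += y * f
```

A stream of samples would then be handled as `clf.partial_fit(ext(x), y)` for each arriving pair, which is the incremental-update pattern the abstract contrasts with full re-training.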
1 Introduction

Classification tasks are widely used in image classification, personal credit evaluation, user profiling, and so on. These scenarios rely on streaming data and demand low latency. In the era of big data, efficient incremental learning algorithms are therefore necessary to process streaming data in these application scenarios, which also require the highest possible accuracy. For these reasons, improving the accuracy of classification algorithms is an important problem [10] that must be addressed.
* Ji Hu [email protected] Extended author information available on the last page of the article
Multimedia Tools and Applications
At present, there are two approaches to this problem, namely batch learning [8] and online learning [19]. In recent years, the amount of available data has grown beyond the size of memory, and online learning algorithms have consequently attracted attention in the field of machine learning. Many online incremental learning [18] algorithms have been proposed, such as the Perceptron algorithm [14], the Passive-Aggressive algorithm (PA) [1, 9], the Online Gradient Descent algorithm (OGD) [3], the Stochastic Gradient Descent algorithm (SGD) [2], the Truncated Gradient algorithm (TG) [3], the Weight Adaptive Regularization algorithm [5, 13], and the Confidence-Weighted algorithm (CW) [4, 6, 7]. However, some of these, such as the Perceptron and OGD algorithms, effectively add noise to the model during each update, resulting in failure to converge, or in low and unstable convergence efficiency. Other algorithms cannot produce a sparse solution.
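The TG algorithm mentioned above addresses the sparsity issue by periodically shrinking small weights toward zero after ordinary gradient steps. A minimal sketch of this idea, using online logistic regression as the base learner (the loss choice, hyperparameter values, and function names here are illustrative assumptions, not the configuration used in this paper):

```python
import numpy as np

def truncate(w, alpha, theta):
    """Truncation operator: coordinates with |w_i| <= theta are shrunk
    toward zero by alpha (clipping at zero); larger coordinates are
    left untouched, so strong features survive."""
    out = w.copy()
    small_pos = (w >= 0) & (w <= theta)
    small_neg = (w < 0) & (w >= -theta)
    out[small_pos] = np.maximum(0.0, w[small_pos] - alpha)
    out[small_neg] = np.minimum(0.0, w[small_neg] + alpha)
    return out

def truncated_gradient_fit(X, y, eta=0.1, g=0.05, theta=1.0, K=10):
    """Online logistic regression (labels in {0, 1}) with a truncation
    step applied every K updates, yielding a sparse weight vector."""
    w = np.zeros(X.shape[1])
    for t, (x, label) in enumerate(zip(X, y), start=1):
        p = 1.0 / (1.0 + np.exp(-np.clip(w @ x, -30, 30)))
        w -= eta * (p - label) * x      # plain online gradient step
        if t % K == 0:                   # periodic truncation
            w = truncate(w, eta * g * K, theta)
    return w
```

Because only coordinates that stay small are driven exactly to zero, the resulting model is sparse without discarding informative features, which is the property the plain Perceptron and OGD updates lack.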