Substep active deep learning framework for image classification
THEORETICAL ADVANCES
Guoqiang Li1 · Ning Gong1

Received: 3 July 2019 / Accepted: 9 June 2020
© Springer-Verlag London Ltd., part of Springer Nature 2020
Abstract

In image classification, acquiring image labels is often expensive and time-consuming. To reduce this labeling cost, active learning has been introduced into the field. Although some active learning algorithms have been proposed, they all apply a single sampling strategy or combine multiple sampling strategies simultaneously (e.g., correlation, uncertainty and label-based measures), without considering the relationship between substep sampling strategies. To this end, we design a new active learning scheme called substep active deep learning (SADL) for image classification. In SADL, samples are first selected by the correlation strategy and then filtered by the uncertainty and label-based measures; the resulting samples are fed into CNN model training. Experiments on three data sets (MNIST, Fashion-MNIST and CIFAR-10) against state-of-the-art active learning algorithms verify that substep active deep learning is rational and effective.

Keywords Convolutional neural network · Active learning · Substep · Image classification
1 Introduction

In computer vision and machine learning, image classification is an important task [1–4]. Various machine learning algorithms have been proposed to solve it, such as support vector machines (SVMs) [5], k-means clustering [6] and Bayesian networks [7]. Among them, convolutional neural networks (CNNs) have shown outstanding performance on image classification and have brought the field into a new era [8]. However, as the depth of CNN architectures grows, CNNs require more and more labeled data to train their parameters [9, 10], and label acquisition is time-consuming and expensive in real applications [11, 12]. For this reason, active learning has been introduced to label samples at minimum cost.

Active learning [13, 14] is employed to select the most important samples for classifiers. In active learning, a small number of labeled samples are selected as the initial training set; then the most important unlabeled samples are determined and selected from the unlabeled pool by a sampling strategy to update the training set. This process repeats until the unlabeled pool is empty or the performance reaches a target level. Early sampling strategies used in active learning include uncertainty [15], loss minimization [16], variance reduction [17] and diversity measures [18]. However, these strategies are single-sampling techniques that do not consider the relationships between samples. To address this, Huang proposed active learning by querying informative and representative examples [19]. Li

* Ning Gong
  [email protected]

1 Key Laboratory of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao, China
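The pool-based loop described above (train on a small labeled set, score the unlabeled pool with a sampling strategy, query labels for the top samples, retrain) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `train_model`, `acquisition_score`, and the `oracle` labeling call are hypothetical stand-ins for a CNN, a sampling strategy, and a human annotator.

```python
import random

def train_model(labeled):
    """Stand-in for fitting a CNN on the current labeled set."""
    return {"n_train": len(labeled)}

def acquisition_score(model, sample):
    """Stand-in for a sampling strategy (uncertainty, diversity, ...)."""
    return random.random()

def active_learning_loop(labeled, unlabeled, oracle, budget, batch_size=4):
    """Repeatedly move the highest-scoring unlabeled samples into the
    training set until the labeling budget is spent or the pool is empty."""
    model = train_model(labeled)
    while budget > 0 and unlabeled:
        ranked = sorted(unlabeled,
                        key=lambda s: acquisition_score(model, s),
                        reverse=True)
        batch = ranked[:min(batch_size, budget)]
        for s in batch:
            unlabeled.remove(s)
            labeled.append((s, oracle(s)))  # query a label at labeling cost
        budget -= len(batch)
        model = train_model(labeled)        # retrain on the enlarged set
    return model, labeled

model, labeled = active_learning_loop(
    labeled=[(0, "cat")], unlabeled=list(range(1, 20)),
    oracle=lambda s: "cat" if s % 2 else "dog", budget=8)
print(len(labeled))  # initial 1 labeled sample + 8 queried = 9
```

The stopping conditions in the `while` guard mirror the two termination criteria above: an empty unlabeled pool, or (via the budget) a cap standing in for reaching a target performance.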
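Of the single-sampling strategies listed above, uncertainty is the simplest to make concrete. A common formulation (one option among several, not necessarily the one used in the cited works) scores each sample by the Shannon entropy of the classifier's predicted class distribution; `probs` here is an assumed softmax output.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.97, 0.01, 0.02]   # near one-hot -> low uncertainty
uncertain = [0.34, 0.33, 0.33]   # near uniform -> high uncertainty

# The uncertainty strategy queries the samples with the largest entropy,
# i.e., those the current model is least sure about.
print(entropy(uncertain) > entropy(confident))  # True
```

Because such a score looks at one sample at a time, it cannot capture redundancy between queried samples, which is exactly the limitation motivating the informative-and-representative and substep approaches discussed here.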