Few-shot learning with adaptively initialized task optimizer: a practical meta-learning approach
Han-Jia Ye1 · Xiang-Rong Sheng1 · De-Chuan Zhan1

Received: 4 May 2019 / Revised: 12 July 2019 / Accepted: 6 September 2019
© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2019
Abstract
Considering the data collection and labeling cost in real-world applications, training a model with limited examples is an essential problem in machine learning, visual recognition, etc. Directly training a model on such few-shot learning (FSL) tasks leads to severe over-fitting, so an effective task-level inductive bias becomes a key form of supervision. By treating each few-shot task as a whole, extracting task-level patterns, and learning a task-agnostic model initialization, the model-agnostic meta-learning (MAML) framework enables various models to be applied to FSL tasks. Given a training set with a few examples, MAML optimizes a model via a fixed number of gradient descent steps from an initial point chosen beforehand. Although this general framework yields empirically satisfactory results, its initialization neglects task-specific characteristics and also aggravates the computational burden. In this manuscript, we propose the AdaptiVely InitiAlized Task OptimizeR (Aviator) approach for few-shot learning, which incorporates the task context into the determination of the model initialization. This task-specific initialization facilitates the model optimization process, so that high-quality model solutions are obtained efficiently. To this end, we decouple the model and apply a set transformation over the training set to determine the initial top-layer classifier. A re-parameterization of the first-order gradient descent approximation facilitates gradient back-propagation. Experiments on synthetic and benchmark data sets validate that our Aviator approach achieves state-of-the-art performance, and visualization results demonstrate the task-adaptive features of the proposed method.

Keywords Few-shot learning · Meta-learning · Supervised learning · Multi-task learning · Task-specific
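As a rough illustration of the contrast sketched above (not the paper's actual implementation), the following Python/PyTorch snippet shows MAML-style adaptation, which takes a fixed number of first-order gradient steps from a shared initial classifier, next to a task-adaptive initialization that builds the initial top-layer classifier from the support set. The class-mean set transformation used here is only an assumed stand-in for Aviator's learned transformation.

# Illustrative sketch only: contrasts a task-agnostic initialization adapted by
# fixed gradient steps (MAML-style) with a task-specific initial top-layer
# classifier computed from the support set. The class-mean "set transformation"
# below is an assumed placeholder, not Aviator's learned transformation.
import torch
import torch.nn.functional as F

def maml_inner_loop(classifier_init, feats, labels, lr=0.01, steps=5):
    # Adapt a shared top-layer classifier w with a fixed number of
    # first-order gradient descent steps on the support-set embeddings.
    w = classifier_init.detach().clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(feats @ w.t(), labels)
        grad, = torch.autograd.grad(loss, w)
        w = (w - lr * grad).detach().requires_grad_(True)
    return w

def adaptive_init(feats, labels, n_class):
    # Hypothetical set transformation: use the class-mean embeddings of the
    # support set as a task-specific starting point for the classifier.
    return torch.stack([feats[labels == c].mean(dim=0) for c in range(n_class)])

Under this sketch, the same inner loop can be started from adaptive_init(feats, labels, n_class) instead of a shared classifier_init, so fewer adaptation steps are needed to reach a good task-specific solution.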
Editors: Kee-Eung Kim and Jun Zhu.
De-Chuan Zhan [email protected]
Han-Jia Ye [email protected]

1 Nanjing University, Nanjing, China
1 Introduction

Although modern machine learning approaches achieve remarkable improvements in various real-world fields such as visual recognition (Krizhevsky et al. 2017), one of the key elements in constructing such helpful models is a large training set (Russakovsky et al. 2015; Su et al. 2018). Taking the instance collection and labeling cost into consideration, learning "rich" knowledge from "small" data is necessary and important. For example, images of rare species are hard to collect, so the model should be able to perform visual recognition based on a single or a few reference examples (Wang et al. 2018b); it is also cumbersome to require a user to record multiple facial expressions into a system