Learning to Learn: Model Regression Networks for Easy Small Sample Learning
Abstract. We develop a conceptually simple but powerful approach that can learn novel categories from few annotated examples. In this approach, the experience with already learned categories is used to facilitate the learning of novel classes. Our insight is two-fold: (1) there exists a generic, category agnostic transformation from models learned from few samples to models learned from large enough sample sets, and (2) such a transformation can be effectively learned by high-capacity regressors. In particular, we automatically learn the transformation with a deep model regression network on a large collection of model pairs. Experiments demonstrate that encoding this transformation as prior knowledge greatly facilitates recognition in the small sample size regime on a broad range of tasks, including domain adaptation, fine-grained recognition, action recognition, and scene classification.

Keywords: Small sample learning · Transfer learning · Object recognition · Model transformation · Deep regression networks
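To make the core idea concrete, the sketch below illustrates (in PyTorch, which the paper does not prescribe) how such a model regression network could be trained: a small multilayer perceptron T is fit on a collection of (w0, w*) weight pairs, where w0 comes from a classifier trained on few samples and w* from a classifier trained on the full sample set. The weight dimension, architecture, optimizer, and plain MSE objective are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (not the authors' code) of a model regression network:
# an MLP T mapping small-sample classifier weights w0 to the corresponding
# large-sample weights w*. Dimensions and the MSE objective are assumptions.
import torch
import torch.nn as nn

WEIGHT_DIM = 4097  # assumed: feature dimension + bias term


class ModelRegressionNet(nn.Module):
    def __init__(self, dim=WEIGHT_DIM, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, w0):
        return self.net(w0)


def train_regressor(model_pairs, epochs=10, lr=1e-4):
    """model_pairs: iterable of (w0, w_star) weight-vector tensors,
    collected offline by training classifiers on few vs. many samples."""
    T = ModelRegressionNet()
    opt = torch.optim.Adam(T.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for w0, w_star in model_pairs:
            opt.zero_grad()
            loss = loss_fn(T(w0), w_star)  # regress toward the large-sample model
            loss.backward()
            opt.step()
    return T
```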
1 Motivation
Over the past decade, large-scale object recognition has achieved high performance levels due to the integration of powerful machine learning techniques with big annotated training data sets [38,51,52,62,79,83,84]. In practical applications, however, training examples are often expensive to acquire or otherwise scarce [30]. Visual phenomena follow a long-tail distribution, in which a few subcategories are common while many are rare with limited training data even in the big-data setting [105,106]. More crucially, current recognition systems assume a set of categories known a priori, despite the obviously dynamic and open nature of the visual world [12,32,64,96]. Such scenarios of learning novel categories from few examples pose a multitude of open challenges for object recognition in the wild. For instance, when operating in natural environments, robots are expected to recognize unfamiliar objects after seeing only a few examples [50]. Humans are remarkably good at grasping a new category and generalizing meaningfully to novel instances from just a short exposure to a single example [30,81]. By contrast, typical machine learning tools require tens, hundreds, or thousands of training examples and often break down for small sample learning [7,40].
Fig. 1. Our main hypothesis is that there exists a generic, category agnostic transformation T from classifiers w0 learned from few annotated samples (blue) to the underlying classifiers w∗ learned from large sets of samples (red). We estimate the transformation T by learning a deep regression network on a large collection of model pairs, i.e., a model regression network. For a novel category/task (such as scene classification or fine-grained object recognition), we introduce the learned T to construct the target classifier from the model learned from its few annotated samples.
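At test time, the learned regressor would be applied to a classifier fit on the few available examples of the novel category. The hedged sketch below (continuing the assumed PyTorch setup above) fits a simple logistic-regression classifier on the few annotated samples to obtain w0 and passes it through T; the helper names, feature extraction, and training details are hypothetical, not the paper's exact procedure.

```python
# Sketch of applying the learned transformation T to a novel category.
import torch


def few_shot_weights(features, labels, steps=200, lr=0.1):
    """Fit a logistic-regression classifier (weights + bias) on few examples."""
    d = features.shape[1]
    w = torch.zeros(d + 1, requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    x = torch.cat([features, torch.ones(len(features), 1)], dim=1)  # append bias
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            x @ w, labels.float()
        )
        loss.backward()
        opt.step()
    return w.detach()


def transformed_classifier(T, features, labels):
    """Return the regressed ("large-sample-like") weights for a novel class."""
    w0 = few_shot_weights(features, labels)
    with torch.no_grad():
        w_hat = T(w0)  # map the small-sample model toward a large-sample model
    return w_hat

# Scoring a test feature x (with the bias term appended):
#   score = torch.cat([x, torch.ones(1)]) @ w_hat
```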