Domain adaptation for object recognition using subspace sampling demons



Youshan Zhang1 · Brian D. Davison1

Received: 3 November 2019 / Revised: 26 May 2020 / Accepted: 13 July 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract
Manually labeling data for training machine learning models is time-consuming and expensive. Therefore, it is often necessary to apply models built in one domain to a new domain. However, existing approaches do not evaluate the quality of the intermediate features learned while transferring from the source domain to the target domain, which can result in sub-optimal features. Moreover, existing transfer learning models do not provide optimal results for a new domain. In this paper, we first propose a fast subspace sampling demons (SSD) method to learn intermediate subspace features from two domains and then evaluate the quality of the learned features. To show the applicability of our model, we test it on a synthetic dataset as well as several benchmark datasets. Extensive experiments demonstrate significant improvements in classification accuracy over the state of the art.

Keywords Domain adaptation · Object recognition · Subspace sampling
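The abstract's central idea, learning intermediate subspace features that bridge a source and a target domain, can be illustrated with a minimal, hypothetical sketch: build a PCA subspace for each domain and blend between the two bases, re-orthonormalizing each blend. All names and the toy data below are assumptions for illustration; this is a generic intermediate-subspace interpolation, not the paper's demons-based SSD algorithm (geodesic interpolation on the Grassmann manifold would be a more principled alternative).

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions (d x k orthonormal columns) of centered X (n x d)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T

def intermediate_subspaces(Ps, Pt, steps):
    """Linearly blend two subspace bases and re-orthonormalize each blend.

    A naive stand-in for sampling a continuous path of subspaces between
    the source basis Ps and target basis Pt (both d x k).
    """
    bases = []
    for t in np.linspace(0.0, 1.0, steps):
        B = (1 - t) * Ps + t * Pt
        Q, _ = np.linalg.qr(B)  # re-orthonormalize the blended basis
        bases.append(Q)
    return bases

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 10))        # toy source samples
Xt = rng.normal(size=(100, 10)) + 2.0  # toy target samples with a shift
Ps, Pt = pca_basis(Xs, 3), pca_basis(Xt, 3)
# Project source data onto each intermediate subspace to obtain
# a sequence of intermediate feature representations.
feats = [Xs @ B for B in intermediate_subspaces(Ps, Pt, 5)]
print(len(feats), feats[0].shape)  # → 5 (100, 3)
```

The quality of such intermediate features is exactly what the paper argues existing approaches fail to evaluate; the sketch only shows how a continuous family of subspaces between two domains can be generated at all.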

Youshan Zhang: [email protected]
Brian D. Davison: [email protected]
1 Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA

Multimedia Tools and Applications

1 Introduction

Modern society produces a huge amount of data in a variety of forms, e.g., text, image, audio, and video. Industry and the research community have a great demand for automatic classification and analysis of these different forms of data [3, 9, 40]. However, it is time-consuming and expensive to acquire enough labeled data to train machine learning models from scratch. It is therefore valuable to learn a model for a new target domain from abundant labeled samples in a different, existing domain. Unfortunately, due to differences between domains, termed data bias or domain shift [25], machine learning models often do not generalize well from an existing domain to a novel unlabeled domain. To address the domain shift issue, mechanisms for extracting feature representations from a continuous intermediate space (between the source and target representations) have been widely used in many tasks such as sentiment classification [3, 7] and object recognition [12, 13]. Several approaches address the domain shift problem; a prominent one is domain adaptation [12, 18, 32]. Efforts have been made in both semi-supervised [2, 5, 24] and unsupervised [4, 6, 23, 41, 42] domain adaptation for the object recognition problem. In both cases, the source domain is completely labeled; in the former, the target domain also contains a small amount of labeled data, whereas in the latter, the target domain is entirely unlabeled. Often the small amount of labeled target data alone is insufficient to construct a good classifier bec