Abundant Inverse Regression Using Sufficient Reduction and Its Applications

Abstract. Statistical models such as linear regression drive numerous applications in computer vision and machine learning. The landscape of practical deployments of these formulations is dominated by forward regression models that estimate the parameters of a function mapping a set of p covariates, x, to a response variable, y. The lesser-known alternative, Inverse Regression, offers various benefits that are much less explored in vision problems. The goal of this paper is to show how Inverse Regression in the “abundant” feature setting (i.e., many subsets of features are associated with the target label or response, as is the case for images), together with a statistical construction called Sufficient Reduction, yields highly flexible models that are a natural fit for model estimation tasks in vision. Specifically, we obtain formulations that provide the relevance of individual covariates used in prediction at the level of specific examples/samples, in a sense explaining why a particular prediction was made. With no compromise in performance relative to other methods, the ability to interpret why a learning algorithm behaves in a specific way for each prediction adds significant value in numerous applications. We illustrate these properties and the benefits of Abundant Inverse Regression on three distinct applications.

Keywords: Inverse regression · Kernel regression · Abundant regression · Temperature prediction · Age estimation · Alzheimer’s disease

1 Introduction

Regression models are ubiquitous in computer vision applications (e.g., medical imaging [1] and face alignment by shape regression [2]). In scientific data analysis, regression models are the default tool of choice for identifying the association between a set of input feature vectors (covariates) x ∈ X and an output (dependent) variable y ∈ Y. In most applications, the regressor is obtained by minimizing (or maximizing) a loss (or fidelity) function under the assumption that the dependent variable y is corrupted with noise ε: y = f(x) + ε.
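To make this setup concrete, the following is a minimal sketch in Python/NumPy of fitting such a forward model by least squares; the 1-D linear f and all constants are hypothetical choices for illustration (the paper's setting involves p covariates and richer function classes). As noted below, minimizing the squared loss amounts to estimating the conditional mean E[y|x].

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y = f(x) + eps with a hypothetical f(x) = 2x + 1;
# note the noise is placed on the response y only.
n = 500
x = rng.uniform(-1.0, 1.0, size=n)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=n)

# Forward regression: choose f to minimize the squared loss
# sum_i (y_i - f(x_i))^2; over linear f, the minimizer estimates
# the conditional mean E[y|x].
X = np.column_stack([x, np.ones(n)])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
print(beta)  # close to the true coefficients [2.0, 1.0]
```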

[Fig. 1 image panels. Left, temperature: actual 25.0 °C, est. 23.6 °C; actual −7.2 °C, est. −3.7 °C. Right, age: actual 20, est. 16; actual 74, est. 69.]

Fig. 1. Dynamic feature weights for two tasks: ambient temperature prediction (left) and age estimation (right). Our formulation provides a way to determine, at test time, which features are most important to the prediction. Our results are competitive, which demonstrates that we achieve this capability without sacrificing accuracy. (Color figure online)

Consequently, solving for the regressor is, in fact, equivalent to estimating the expectation E[y|x]; in statistics and machine learning, this construction is typically referred to as forward (or standard) regression [3]. The above formulation does not attempt to model noise in x directly. Even for linear forms of f(·), if the noise characteristics