Multi-Task Learning with Group-Specific Feature Space Sharing
1 Department of Electrical Engineering and Computer Science, University of Central Florida, 4000 Central Florida Blvd., Orlando, FL 32816, USA
{niloofar.yousefi,michaelg}@ucf.edu
2 Department of Electrical and Computer Engineering, Florida Institute of Technology, 150 W. University Blvd., Melbourne, FL 32901, USA
georgio@fit.edu
Abstract. When faced with learning a set of inter-related tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data scarcity limitations by co-learning all these tasks simultaneously to offer improved performance. We propose a novel Multi-Task Multiple Kernel Learning framework based on Support Vector Machines for binary classification tasks. By considering pair-wise task affinity in terms of similarity between a pair’s respective feature spaces, the new framework, compared to other similar MTL approaches, offers a high degree of flexibility in determining how similar feature spaces should be, as well as which pairs of tasks should share a common feature space in order to benefit overall performance. The associated optimization problem is solved via a block coordinate descent, which employs a consensus-form Alternating Direction Method of Multipliers algorithm to optimize the Multiple Kernel Learning weights and, hence, to determine task affinities. Empirical evaluation on seven data sets exhibits a statistically significant improvement of our framework’s results compared to the ones of several other Clustered Multi-Task Learning methods.
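As an illustrative aside, the Python sketch below conveys the flavor of such a block coordinate descent: it alternates per-task SVM solves under fixed kernel-weight combinations with a consensus-form ADMM round on the Multiple Kernel Learning weights. It is a minimal sketch rather than the paper's algorithm: tying every task to a single consensus vector z is a simplification of the group-specific sharing proposed here, and the toy data, kernel bank, C, and rho are placeholder choices.

```python
# Minimal sketch (not the paper's exact algorithm): block coordinate descent
# alternating (1) per-task SVM training with fixed kernel weights and
# (2) a consensus-form ADMM update of those weights. Coupling all tasks to a
# single consensus vector z simplifies the group-specific sharing of the paper;
# the data, kernel bank, C and rho are placeholder choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    tau = (css[idx] - 1.0) / (idx + 1.0)
    return np.maximum(v - tau, 0.0)

# Toy setup: T binary tasks and a bank of M RBF kernels of different widths.
T, gammas, C, rho = 3, [0.01, 0.1, 1.0, 10.0], 1.0, 1.0
M = len(gammas)
tasks = [make_classification(n_samples=80, n_features=10, random_state=t)
         for t in range(T)]
kernel_banks = [[rbf_kernel(X, X, gamma=g) for g in gammas] for X, _ in tasks]

theta = np.full((T, M), 1.0 / M)  # per-task kernel weights (ADMM local vars)
z = np.full(M, 1.0 / M)           # consensus weights shared by all tasks
u = np.zeros((T, M))              # scaled ADMM dual variables

for _ in range(20):
    grads = np.zeros((T, M))
    for t, ((X, y), Ks) in enumerate(zip(tasks, kernel_banks)):
        # Block 1: fix theta_t and train an SVM on the combined Gram matrix.
        K = sum(w * Km for w, Km in zip(theta[t], Ks))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        a, sv = svm.dual_coef_.ravel(), svm.support_  # signed duals y_i*alpha_i
        # Gradient of the per-task MKL objective w.r.t. each kernel weight:
        # dJ/d(theta_m) = -0.5 * alpha^T Y K_m Y alpha.
        grads[t] = [-0.5 * a @ Km[np.ix_(sv, sv)] @ a for Km in Ks]
    # Block 2: one consensus-ADMM round on the kernel weights.
    for t in range(T):  # local step: linearized proximal update, then simplex
        theta[t] = project_simplex(z - u[t] - grads[t] / rho)
    z = project_simplex((theta + u).mean(axis=0))  # consensus (z) update
    u += theta - z                                 # dual-variable update

print("consensus kernel weights:", np.round(z, 3))
```

The appeal of the consensus form is that each task keeps its own local kernel weights while the coupling term negotiates how strongly those weights (and hence the induced feature spaces) are pulled together; the paper generalizes this single-consensus coupling to pair- and group-specific sharing.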
1 Introduction
Multi-Task Learning (MTL) is a machine learning paradigm in which several related tasks are learned simultaneously, with the hope that, by sharing information among tasks, the generalization performance of each task will be improved. The underlying assumption behind this paradigm is that the tasks are related to each other; thus, how task relatedness is captured and incorporated into an MTL framework is a crucial question. Although many different MTL methods [1,7,12,15,18,27] have been proposed, which differ in how the relatedness across multiple tasks is modeled, they all utilize a parameter- or structure-sharing strategy to capture task relatedness. However, these previous methods are restricted in the sense that they assume all tasks are similarly related to each other and can contribute equally to the joint learning process. This assumption can be violated in many practical applications, as “outlier” tasks often exist. In this case, the effect of “negative transfer”, i.e., sharing information between irrelevant tasks, can lead to degraded generalization performance. To address this issue, several methods, along different directions, have been proposed to discover the inherent relationship among