RESEARCH ARTICLE-COMPUTER ENGINEERING AND COMPUTER SCIENCE
A Novel Angle-Based Learning Framework on Semi-supervised Dimensionality Reduction in High-Dimensional Data with Application to Action Recognition

Zahra Ramezani1 · Ahmad Pourdarvish1 · Kiumars Teymourian2

Received: 13 January 2020 / Accepted: 13 August 2020
© King Fahd University of Petroleum & Minerals 2020
Abstract

Outliers in high-dimensional data pose several challenges for classification, in particular accurate classification under imbalanced scatters. In this paper, we propose an angle-based framework, Angle Global and Local Discriminant Analysis (AGLDA), to account for imbalanced scatters. AGLDA chooses an optimal subspace using the angle cosine to achieve an appropriate scatter balance in the dataset. The advantage of this method is that it can classify datasets affected by outliers by finding an optimal subspace in high-dimensional data. In general, the method is more effective and more reliable than other methods for classifying data when outliers are present. In addition, human posture classification is presented as an application of balanced semi-supervised dimensionality reduction, to assist human factors experts and designers of industrial systems in diagnosing the postures of maintenance crews. The experimental results show the efficiency of the proposed method on two real case studies, and the results have also been verified by comparison with other approaches.

Keywords High-dimensional data · Dimensionality reduction · Human factor · Angle-based discriminant · Scatter balance
1 Introduction

The growth of information in multimedia data streams (images and video) has made it challenging to extract suitable features from high-dimensional data for diagnosing postures [1–4]. Dimensionality reduction methods offer a solution to this problem, but existing outlier classes reduce accuracy and cause the loss of necessary information. To classify high-dimensional data [5–7], two important issues should be considered in advance. The first is the effect of outlier classes on the identification of an optimal subspace,
Ahmad Pourdarvish (corresponding author)
[email protected]

Zahra Ramezani
[email protected]

Kiumars Teymourian
[email protected]

1 Department of Statistics, University of Mazandaran, Babolsar, Iran

2 Department of Civil, Environmental and Natural Resources Engineering, Operation, Maintenance and Acoustics, Luleå University of Technology, Luleå, Sweden
and the other is the sparsity issue in the projection matrix for eigenvalue-based dimensionality reduction techniques [8]. In recent years, researchers have tried to account for the effect of outlier classes in optimal subspace identification. A subspace selection algorithm based on the geometric mean has been proposed to reduce the influence of outliers on subspace selection [9]. In addition, a Fractional-step Linear Discriminant Analysis (F-LDA) algorithm that is robust to outliers has been proposed [10], in which the dimensionality reduction is applied in a few fraction
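To make the angle-based idea concrete, the sketch below illustrates the general ingredients such methods build on: between-class and within-class scatter matrices, a leading discriminant direction, and the cosine of the angle between that direction and a candidate axis. This is only a minimal illustration under our own assumptions (toy data, a standard LDA direction, hypothetical function names), not the AGLDA algorithm or the subspace selection procedure of [9, 10].

```python
import numpy as np

def class_scatters(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T        # weighted class-mean spread
        Sw += (Xc - mc).T @ (Xc - mc)        # spread within each class
    return Sb, Sw

def angle_cosine(u, v):
    """Cosine of the angle between two direction vectors."""
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy data: two balanced classes plus a small outlier class
# whose distant mean skews the between-class scatter.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)),
               rng.normal(5, 1, (30, 5)),
               rng.normal(20, 1, (3, 5))])
y = np.array([0] * 30 + [1] * 30 + [2] * 3)

Sb, Sw = class_scatters(X, y)
# Leading discriminant direction: top eigenvector of Sw^{-1} Sb.
vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
w = vecs[:, np.argmax(vals.real)].real
# Score a candidate axis by the cosine of its angle with that direction.
e0 = np.eye(5)[0]
print(f"cos(angle) between LDA direction and axis 0: {angle_cosine(w, e0):.3f}")
```

A subspace whose basis vectors make small angles (cosine near 1) with the discriminant directions preserves class separation; angle-based criteria exploit this to compare candidate subspaces without letting a single outlier class dominate the score.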