View-Invariant Gait Recognition Using a Joint-DLDA Framework

1 Postgraduate and Research Section, ESIME Culhuacan, Instituto Politecnico Nacional, Mexico City, Mexico
  [email protected]
2 Department of Computer Science, University of Warwick, Coventry, UK

Abstract. In this paper, we propose a new view-invariant framework for gait analysis. The framework profits from the dimensionality reduction advantages of Direct Linear Discriminant Analysis (DLDA) to build a unique view-invariant model. Among these advantages is the capability to tackle the under-sampling problem (USP), which commonly occurs when the number of dimensions of the feature space is much larger than the number of training samples. Our framework employs Gait Energy Images (GEIs) as features to create a single joint model suitable for the classification of various angles with high accuracy. Performance evaluations show the advantages of our framework, in terms of computational time and recognition accuracy, as compared to state-of-the-art view-invariant methods.

Keywords: Gait recognition · View-invariant · Direct Linear Discriminant Analysis · Gait Energy Image

1 Introduction

Person identification through gait analysis using appearance-based approaches has gained considerable importance over the last few years. These approaches, which do not rely on structural models of human walking, have been shown to attain high recognition accuracy with low computational cost by extracting information from simple moving silhouettes [6]. However, several factors may hinder their recognition performance. Among these factors are clothes, footwear, carried objects, walking surfaces, elapsed time, and the view angle. The latter, which is defined as the angle between the camera's optical axis and the walking direction [14], may have a significant impact on performance, as most appearance-based approaches rely on a fixed view angle [24].

In this paper, we present an appearance-based framework for gait recognition that tackles the challenges associated with different view angles. Our framework is based on subspace learning and employs Gait Energy Images (GEIs) as features.
Specifically, it employs Direct Linear Discriminant Analysis (DLDA) to create a single model for classification. To this end, we employ as training data GEIs computed from raw sequences captured at several view angles. We call this framework Joint-DLDA. The main novelties of Joint-DLDA are as follows:

1. No need to create independent models for classification at different view angles. This is particularly useful in practical situations, where it is common for the probe data to be captured at an angle not present in the gallery data. A single model for the classification of several angles can handle these situations.
2. The ability to inherently handle high-dimensional feature spaces.
3. A considerably low computational cost with a simple classification stage.

Performa
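To make the above description concrete, the sketch below outlines, in Python/NumPy, how a Joint-DLDA-style model could be built: GEIs are averaged from aligned silhouettes, samples from all available view angles are pooled into one training set, a Direct LDA projection is fitted, and probes are matched with a nearest-neighbour rule. This is a minimal illustration under stated assumptions rather than the authors' implementation; the loader `load_multiview_geis`, the number of retained components, and the whitening of the within-class directions are choices made only for the example.

```python
# Minimal Joint-DLDA-style sketch (illustrative; not the authors' code).
import numpy as np


def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes (T, H, W) into a GEI."""
    return np.asarray(silhouettes, dtype=np.float64).mean(axis=0)


def fit_dlda(X, y, n_components=None, eps=1e-8):
    """Direct LDA projection for the under-sampled case (features >> samples).

    X : (n_samples, n_features) flattened GEIs, one row per gait sequence.
    y : (n_samples,) integer subject labels. View labels are not needed:
        GEIs from all available angles are pooled into this single model.
    Returns W with shape (n_features, m); project samples with X @ W.
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)

    # Between-class scatter S_b = Phi_b @ Phi_b.T, one weighted column per class.
    Phi_b = np.stack(
        [np.sqrt(np.sum(y == c)) * (X[y == c].mean(axis=0) - mu) for c in classes],
        axis=1)                                              # shape (d, C)

    # Diagonalise S_b through the small C x C Gram matrix; the d x d scatter
    # matrix is never formed, which is how DLDA copes with the USP.
    lam, V = np.linalg.eigh(Phi_b.T @ Phi_b)
    keep = lam > eps
    lam, V = lam[keep], V[:, keep]
    Z = (Phi_b @ V) / lam                                    # Z.T @ S_b @ Z = I

    # Project the within-class scatter into the low-dimensional range of S_b.
    Phi_w = np.concatenate(
        [(X[y == c] - X[y == c].mean(axis=0)).T for c in classes], axis=1)
    ZW = Z.T @ Phi_w                                         # (m, n_samples)
    w_vals, U = np.linalg.eigh(ZW @ ZW.T)                    # ascending order

    # Keep the directions with the smallest within-class spread and whiten them.
    if n_components is not None:
        w_vals, U = w_vals[:n_components], U[:, :n_components]
    return (Z @ U) / np.sqrt(w_vals + eps)


def nearest_neighbour(W, gallery_X, gallery_y, probe_x):
    """Assign a probe GEI the label of its nearest gallery sample in the subspace."""
    dists = np.linalg.norm(gallery_X @ W - probe_x @ W, axis=1)
    return gallery_y[np.argmin(dists)]


# Usage sketch (load_multiview_geis is a hypothetical loader):
# X, y = load_multiview_geis(...)          # GEIs pooled over all view angles
# W = fit_dlda(X, y, n_components=32)
# predicted_id = nearest_neighbour(W, X, y, probe_gei.reshape(1, -1))
```

Working only with the small Gram matrices (C x C for the between-class scatter, m x m after projection) rather than the full d x d scatter matrices is what keeps such a pipeline tractable when each flattened GEI has tens of thousands of dimensions.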