Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers
Khaled Assaleh
Electrical Engineering Department, American University of Sharjah, P.O. Box 26666, Sharjah, UAE
Email: [email protected]

M. Al-Rousan
Computer Engineering Department, Jordan University of Science and Technology, Irbid, Jordan
Email: [email protected]

Received 29 December 2003; Revised 31 August 2004

Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers: they do not require iterative training, and they scale well computationally with the number of classes. Based on polynomial classifiers, we have built an ArSL recognition system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results compared with previously published results that used ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is made in terms of the number of misclassified test patterns, and the reduction in the misclassification rate is substantial: 36% on the training data and 57% on the test data.

Keywords and phrases: Arabic sign language, hand gestures, feature extraction, adaptive neuro-fuzzy inference systems, polynomial classifiers.
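To make the claim about non-iterative training concrete, the following is a minimal sketch of a polynomial classifier: each feature vector is expanded into its polynomial basis terms, and one linear least-squares problem is solved per class in closed form. The degree-2 expansion, one-vs-all targets, and toy data below are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(X, degree=2):
    # Expand each row of X into polynomial basis terms:
    # a constant term, all linear terms, and all products of
    # up to `degree` coordinates (with repetition).
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def train(X, y, degree=2):
    # Closed-form training: one least-squares solve per class,
    # with one-vs-all targets (1 for the class's own samples, else 0).
    # No iterative optimization is involved, and adding a class only
    # adds one more solve against the same expanded matrix M.
    M = poly_expand(X, degree)
    classes = np.unique(y)
    W = np.column_stack([
        np.linalg.lstsq(M, (y == c).astype(float), rcond=None)[0]
        for c in classes
    ])
    return classes, W

def classify(X, classes, W, degree=2):
    # Score every class model and return the best-scoring label.
    scores = poly_expand(X, degree) @ W
    return classes[np.argmax(scores, axis=1)]

# Toy usage: three Gaussian clusters in 2D standing in for feature
# vectors of three signs (purely illustrative data).
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([rng.normal(m, 1.0, size=(30, 2)) for m in means])
y = np.repeat(np.arange(3), 30)
classes, W = train(X, y)
print("training accuracy:", (classify(X, classes, W) == y).mean())
```

Because training reduces to solving linear systems against the same expanded design matrix, adding a class costs only one additional solve, which is the computational scalability the abstract refers to.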
1. INTRODUCTION
Signing has always been part of human communication. The use of gestures is not tied to ethnicity, age, or gender. Infants use gestures as a primary means of communication until their speech muscles are mature enough to articulate meaningful speech. For millennia, deaf people have created and used signs among themselves, and these signs were the only form of communication available to many of them. Within the variety of deaf cultures all over the world, signing evolved into complete and sophisticated languages, learned and elaborated by succeeding generations of deaf children. Normally, there is no problem when two deaf persons communicate using their common sign language. The real difficulties arise when a deaf person wants to communicate with a hearing person; usually both become frustrated in a very short time.

For this reason, there have been several attempts to design smart devices that can act as interpreters between deaf people and others. These devices are categorized as human-computer interaction (HCI) systems. Existing HCI devices for hand gesture recognition fall into two categories: glove-based and vision-based systems. Glove-based systems rely on electromechanical devices for collecting data about the gestures [1, 2, 3, 4, 5]. Here the person must wear some sort of wired gloves that are interfaced with