American Sign Language Character Recognition Using Convolution Neural Network
Abstract Communication is an important part of our lives. Deaf and dumb people, being unable to speak and listen, experience a lot of problems while communicating with normal people. There are many ways by which people with these disabilities try to communicate. One of the most prominent is the use of sign language, i.e. hand gestures. It is necessary to develop an application for recognizing the gestures and actions of sign language so that deaf and dumb people can communicate easily even with those who do not understand sign language. The objective of this work is to take an elementary step towards breaking the communication barrier between normal people and deaf and dumb people with the help of sign language. The image dataset in this work consists of 2524 ASL gestures, which were used as input for the pre-trained VGG16 model. VGG16 is a vision model developed by the Visual Geometry Group at the University of Oxford. The accuracy of the model obtained using the Convolutional Neural Network was about 96%.
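To make the pipeline described in the abstract concrete, the sketch below shows one plausible way to fine-tune a pre-trained VGG16 for 36-class ASL gesture classification. The paper does not state its framework, layer sizes, optimizer, data layout, or training schedule, so the Keras/TensorFlow API, the asl_data directory, the 256-unit dense head, the dropout rate, and the epoch count are illustrative assumptions rather than the authors' implementation.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

NUM_CLASSES = 36          # 26 letters + 10 digits, as in Fig. 1 (assumed split)
IMG_SIZE = (224, 224)     # input resolution expected by VGG16

# Convolutional base pre-trained on ImageNet, without the original
# 1000-class classifier head; its weights stay frozen here.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=IMG_SIZE + (3,))
base.trainable = False

# Small classification head for the 36 ASL gestures (sizes are assumed).
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical data layout: asl_data/<class_name>/*.jpg
def load_split(subset):
    ds = tf.keras.utils.image_dataset_from_directory(
        "asl_data", image_size=IMG_SIZE, batch_size=32,
        label_mode="categorical", validation_split=0.2,
        subset=subset, seed=42)
    # Apply VGG16-specific preprocessing (channel reordering, mean subtraction).
    return ds.map(lambda x, y: (preprocess_input(x), y))

train_ds = load_split("training")
val_ds = load_split("validation")
model.fit(train_ds, validation_data=val_ds, epochs=10)

Freezing the pre-trained convolutional base and training only a small head is a common choice when the dataset is small, such as the 2524 images mentioned in the abstract, since the ImageNet features transfer well to new image classes.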
1 Introduction
Unlike communication through speech, which uses sound to express one's thoughts, a sign language uses facial expressions, movements of the lips, hand movements and gestures, and the alignment and positioning of the hands and body. Like spoken languages, sign languages vary from one area to another, e.g. ISL (Indian Sign Language), BSL (British Sign Language) and ASL (American Sign Language). Being a vision-based language, it can be categorized into three types: use of the fingers to spell each letter of a word (fingerspelling); a sign vocabulary for words (used for most of the communication); and use of mouth and lip movements, facial expressions, and hand and body positions.
Fig. 1 ASL 36 hand gestures [12]
American Sign Language is used predominantly by the deaf population of the United States of America, along with some parts of Canada. It is an advanced and fully standardized language that uses both the shape of a hand gesture and its position and movement in three-dimensional space. It is the primary mode of communication between people who are deaf and their relatives. Fundamentally, two methodologies are used for recognizing hand gestures: one based on sight, i.e. vision, and another based on sensory data measured using gloves. The primary objective of this work is to create a vision-based system to identify the finger-spelled letters of ASL. We chose the vision-based approach because it offers a cleaner and more natural means of interaction and communication between a human and a computer. In this work, 36