Arabic and American Sign Languages Alphabet Recognition by Convolutional Neural Network
Shroog Alshomrani 1, Lina Aljoudi 1, Muhammad Arif 1
Department of Computer Science, Umm Alqura University, Makkah, Saudi Arabia
Hearing loss is a common disability affecting many people worldwide, ranging from mild impairment to complete deafness. The deaf community communicates through sign language, which comprises hand gestures and facial expressions. Communication with deaf people is challenging, however, because not everyone knows sign language. Moreover, every country has developed its own sign language, much like spoken languages, with no standard syntax or grammatical structure. The main objective of this research is to facilitate communication between deaf people and the community around them. Since sign language contains gestures for words, sentences, and letters, this research implements a system that automatically recognizes these gestures and signs from images captured by devices such as cameras. Two sign languages are considered: American Sign Language and Arabic Sign Language. We use a convolutional neural network (CNN) to classify the images into signs and evaluate different CNN configurations on the Arabic and American sign datasets. CNN-2, consisting of two hidden layers, produced the best results (accuracy of 96.4%) for the Arabic sign language dataset, while CNN-3, composed of three hidden layers, achieved an accuracy of 99.6% for the American sign dataset.
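To make the CNN pipeline concrete, the sketch below shows the core operations of a single CNN hidden layer (convolution, ReLU activation, max pooling) applied to a toy image. This is an illustrative NumPy implementation only; the image, kernel values, and sizes are hypothetical and do not reflect the paper's actual CNN-2 or CNN-3 architectures or datasets.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D cross-correlation (no padding, stride 1),
    # the core operation of each CNN hidden layer.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    # Nonlinear activation applied after each convolution.
    return np.maximum(0, x)

def max_pool(x, size=2):
    # Downsample by taking the maximum over non-overlapping size x size windows.
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2*size, :w2*size].reshape(h2, size, w2, size).max(axis=(1, 3))

# Toy 8x8 "sign" image and a 3x3 vertical-edge kernel (hypothetical values).
img = np.arange(64, dtype=float).reshape(8, 8)
k = np.array([[1, 0, -1],
              [1, 0, -1],
              [1, 0, -1]], dtype=float)
feat = max_pool(relu(conv2d(img, k)))
print(feat.shape)  # (3, 3): an 8x8 input shrinks to 6x6 after the 3x3
                   # convolution, then to 3x3 after 2x2 max pooling
```

In a full classifier such as those evaluated here, stacks of these layers would feed a final fully connected softmax layer that maps the extracted features to one class per alphabet sign.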