Inproceedings

A mobile application of American sign language translation via image processing algorithms

2016 IEEE Region 10 Symposium (TENSYMP), pages 104-109. IEEE, May 2016.
DOI: 10.1109/TENCONSpring.2016.7519386

Abstract

Because sign language is not widely used in society at large, deaf and other verbally challenged people face difficulty in communicating on a daily basis. Our study therefore investigates a sign language translator on the smartphone platform, chosen for its portability and ease of use. In this paper, a novel framework comprising established image processing techniques is proposed to recognize images of several sign language gestures. More specifically, we first apply Canny edge detection and seeded region growing to segment the hand gesture from its background. Feature points are then extracted with the Speeded Up Robust Features (SURF) algorithm, and the resulting descriptors are encoded into fixed-length vectors using the Bag of Features (BoF) model. A Support Vector Machine (SVM) is subsequently trained on the encoded gesture image dataset, and the trained model is used to recognize future sign language gesture inputs. The proposed framework has been successfully implemented on smartphone platforms, and experimental results show that it recognizes and translates 16 different American Sign Language gestures with an overall accuracy of 97.13%.
