@inproceedings{lahoti2018android,
abstract = {People with hearing impairment use sign language for communication. They use hand gestures to represent numbers, letters, words, and sentences, which allows them to communicate among themselves. The problem arises when they need to interact with other people. An automated system that converts sign language to text would make such interaction easier. Recently, many systems for sign language recognition have been developed, but most of them run on laptops and desktop computers, which are impractical to carry due to their weight and size. This article describes the design and implementation of an Android application that converts American Sign Language to text, so that it can be used anywhere and at any time. The image is captured by the smartphone camera, and skin segmentation is performed in the YCbCr color space. Features are extracted from the image using the Histogram of Oriented Gradients (HOG) and classified to recognize the sign. The classification is done using a Support Vector Machine (SVM).},
added-at = {2019-11-14T06:13:40.000+0100},
address = {Aurangabad, Maharashtra, India},
author = {Lahoti, Sakshi and Kayal, Shaily and Kumbhare, Sakshi and Suradkar, Ishani and Pawar, Vikul},
biburl = {https://www.bibsonomy.org/bibtex/291e92cc3a119a1a6a37e368482a111a0/jpmor},
booktitle = {2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT)},
description = {Android Based American Sign Language Recognition System with Skin Segmentation and SVM - IEEE Conference Publication},
doi = {10.1109/ICCCNT.2018.8493838},
eventtitle = {International Conference on Computing, Communication and Networking Technologies},
interhash = {b96e6dffd6e02ef26d19af37427b6bcd},
intrahash = {91e92cc3a119a1a6a37e368482a111a0},
keywords = {android machine-learning pattern-recognition real sign-language support-vector-machine svm},
language = {English},
month = {July},
pages = {1--6},
publisher = {IEEE},
school = {Department of Computer Science and Engineering, Government College of Engineering (GCE)},
timestamp = {2020-10-07T13:36:50.000+0200},
title = {Android Based American Sign Language Recognition System with Skin Segmentation and SVM},
url = {https://ieeexplore.ieee.org/abstract/document/8493838},
year = 2018
}