Abstract
This paper evaluates three convolutional neural network (CNN) architectures for recognizing hand configurations of the Brazilian Sign Language (Libras). To improve the generalization of the networks, two techniques were employed: dropout and L2 regularization. A proprietary dataset of 12,200 depth images, captured with the Kinect® sensor, was used: 200 images for each of the 61 Hand Configurations (HCs) of Libras. The training and testing subsets were composed using an interleaving technique. An accuracy of 98% was achieved, surpassing previous results obtained on the same dataset with the k-Nearest Neighbor (kNN) and Novelty classifiers (95.41% and 96.31%, respectively).
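The abstract names two regularization techniques, dropout and L2. As a minimal NumPy sketch of what each computes (not the authors' implementation; the layer sizes, dropout rate, and penalty coefficient below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate, training=True):
    """Inverted dropout: zero a fraction `rate` of units at training
    time and rescale survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

def l2_penalty(weights, lam):
    """L2 regularization term added to the loss: lam * sum(W**2)."""
    return lam * np.sum(weights ** 2)

# Illustrative values: a batch of 4 hidden-layer activations and a weight matrix.
h = np.ones((4, 8))
h_train = dropout(h, rate=0.5)       # surviving units are rescaled to 1/(1-0.5) = 2.0
W = np.full((8, 8), 0.1)
penalty = l2_penalty(W, lam=1e-4)    # added to the training loss
```

At inference time `dropout` is a no-op (`training=False`), which is why the inverted-dropout rescaling is applied during training rather than at test time.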