Author of the publication

Spoken Language Identification Based on I-vectors and Conditional Random Fields.

, , , , and . IWCMC, page 1443-1447. IEEE, (2018)


Other publications of authors with the same name

Cued Speech: A visual communication mode for the deaf society., and . IEICE Electron. Express, 7 (4): 234-239 (2010)

Deep Learning-Based Automatic Pronunciation Assessment for Second Language Learners., , , and . HCI (39), volume 1225 of Communications in Computer and Information Science, page 338-342. Springer, (2020)

Automatic Spoken Language Identification Using Emotional Speech., , , and . HCI (38), volume 1224 of Communications in Computer and Information Science, page 650-654. Springer, (2020)

Lip Shape and Hand Position Fusion for Automatic Vowel Recognition in Cued Speech for French., , and . IEEE Signal Process. Lett., 16 (5): 339-342 (2009)

Simultaneous recognition of multiple sound sources based on 3-d n-best search using microphone array., , , and . EUROSPEECH, page 69-72. ISCA, (1999)

Investigating the role of the Lombard reflex in non-audible murmur (NAM) recognition., , , and . INTERSPEECH, page 2649-2652. ISCA, (2005)

Automatic Method to Build a Dictionary for Class-Based Translation Systems., , , , , , and . CICLing (1), volume 13396 of Lecture Notes in Computer Science, page 289-298. Springer, (2018)

A Study on Far-Field Emotion Recognition Based on Deep Convolutional Neural Networks., , , , , and . CICLing (2), volume 13397 of Lecture Notes in Computer Science, page 181-193. Springer, (2018)

An Empirical Study on Feature Extraction in DNN-Based Speech Emotion Recognition., , , , , and . HCI (48), volume 1293 of Communications in Computer and Information Science, page 315-319. Springer, (2020)

Non-audible murmur recognition based on fusion of audio and visual streams., and . INTERSPEECH, page 2706-2709. ISCA, (2010)