Author of the publication: Kadyan, Virender

Spectral warping based data augmentation for low resource children's speaker verification.

Multim. Tools Appl., 83 (16): 48895-48906 (May 2024)


Other publications of authors with the same name

In domain training data augmentation on noise robust Punjabi Children speech recognition. J. Ambient Intell. Humaniz. Comput., 13 (5): 2705-2721 (2022)
Sentiment classification of movie reviews using GA and NeuroGA. Multim. Tools Appl., 82 (6): 7991-8011 (March 2023)
Prosody features based low resource Punjabi children ASR and T-NT classifier using data augmentation. Multim. Tools Appl., 82 (3): 3973-3994 (2023)
Training augmentation with TANDEM acoustic modelling in Punjabi adult speech recognition system. Int. J. Speech Technol., 24 (2): 473-481 (2021)
Speech-Based Alzheimer's Disease Classification System with Noise-Resilient Features Optimization. AICS, pages 1-4. IEEE (2023)
Improved filter bank on multitaper framework for robust Punjabi-ASR system. Int. J. Speech Technol., 23 (1): 87-100 (2020)
A comparison of Laryngeal effect in the dialects of Punjabi language. J. Ambient Intell. Humaniz. Comput., 13 (5): 2415-2428 (2022)
Transfer learning through perturbation-based in-domain spectrogram augmentation for adult speech recognition. Neural Comput. Appl., 34 (23): 21015-21033 (2022)
ASRoIL: a comprehensive survey for automatic speech recognition of Indian languages. Artif. Intell. Rev., 53 (5): 3673-3704 (2020)
Enhancing accuracy of long contextual dependencies for Punjabi speech recognition system using deep LSTM. Int. J. Speech Technol., 24 (2): 517-527 (2021)