
Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages

(2023). arXiv:2303.01037. 20 pages, 7 figures, 8 tables.

Abstract

We introduce the Universal Speech Model (USM), a single large model that performs automatic speech recognition (ASR) across 100+ languages. This is achieved by pre-training the encoder of the model on a large unlabeled multilingual dataset of 12 million (M) hours spanning over 300 languages, and fine-tuning on a smaller labeled dataset. We use multilingual pre-training with random-projection quantization and speech-text modality matching to achieve state-of-the-art performance on downstream multilingual ASR and speech-to-text translation tasks. We also demonstrate that despite using a labeled training set 1/7-th the size of that used for the Whisper model, our model exhibits comparable or better performance on both in-domain and out-of-domain speech recognition tasks across many languages.
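The abstract credits random-projection quantization for producing discrete targets during BERT-style pre-training of the encoder. Below is a minimal sketch of that idea under assumed dimensions and names (FEATURE_DIM, PROJ_DIM, CODEBOOK_SIZE, quantize are illustrative, not the paper's values): a frozen random projection and a frozen random codebook map each speech frame to a discrete index that can serve as a masked-prediction label.

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM = 80      # e.g. log-mel filterbank dimension (assumed)
PROJ_DIM = 16         # random-projection dimension (assumed)
CODEBOOK_SIZE = 8192  # number of discrete targets (assumed)

# Both the projection matrix and the codebook are randomly initialized and frozen;
# no part of the quantizer is trained.
projection = rng.normal(size=(FEATURE_DIM, PROJ_DIM))
codebook = rng.normal(size=(CODEBOOK_SIZE, PROJ_DIM))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def quantize(features: np.ndarray) -> np.ndarray:
    """Map a (time, FEATURE_DIM) feature matrix to discrete codebook indices."""
    projected = features @ projection                      # (time, PROJ_DIM)
    projected /= np.linalg.norm(projected, axis=1, keepdims=True)
    # Nearest codebook entry by cosine similarity gives the target label
    # for masked prediction during pre-training.
    similarities = projected @ codebook.T                  # (time, CODEBOOK_SIZE)
    return similarities.argmax(axis=1)

# Example: 100 frames of placeholder speech features -> 100 discrete targets.
targets = quantize(rng.normal(size=(100, FEATURE_DIM)))
print(targets.shape, targets[:5])
```

In this setup the encoder, not the quantizer, does all the learning: it is trained to predict the frozen quantizer's indices at masked positions, which avoids jointly learning a codebook.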
