Author of the publication

SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing.

, , , , , , , , , , , , , and . ACL (1), page 5723-5738. Association for Computational Linguistics, (2022)


Other publications of authors with the same name

SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing. , , , , , , , , , and 3 other author(s). CoRR, (2021)

The YiTrans End-to-End Speech Translation System for IWSLT 2022 Offline Shared Task. , , , , , and . CoRR, (2022)

Token2vec: A Joint Self-Supervised Pre-Training Framework Using Unpaired Speech and Text. , , , and . ICASSP, page 1-5. IEEE, (2023)

SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training. , , , , , , and . EMNLP, page 1663-1676. Association for Computational Linguistics, (2022)

CoBERT: Self-Supervised Speech Representation Learning Through Code Representation Learning. , , , , and . CoRR, (2022)

Improving Attention-based End-to-end ASR by Incorporating an N-gram Neural Network. , and . ISCSLP, page 1-5. IEEE, (2021)

SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing. , , , , , , , , , and 4 other author(s). ACL (1), page 5723-5738. Association for Computational Linguistics, (2022)

Multi-View Self-Attention Based Transformer for Speaker Recognition. , , , , , , , and . ICASSP, page 6732-6736. IEEE, (2022)

The YiTrans Speech Translation System for IWSLT 2022 Offline Shared Task. , and . IWSLT@ACL, page 158-168. Association for Computational Linguistics, (2022)

Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data. , , , , , , , , , and . INTERSPEECH, page 2658-2662. ISCA, (2022)