Author of the publication

Please choose a person to relate this publication to.

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed. You can also use the button next to the name to display some publications already assigned to that person.


Other publications of authors with the same name

Scaling Instruction-Finetuned Language Models, and 21 other author(s). CoRR, (2022)

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers. CoRR, (2021)

Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning. ICLR (Poster), OpenReview.net, (2018)

Do Transformer Modifications Transfer Across Implementations and Applications?, and 6 other author(s). EMNLP (1), pages 5758-5773. Association for Computational Linguistics, (2021)

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res., (2020)

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. CoRR, (2019)

PaLM: Scaling Language Modeling with Pathways, and 57 other author(s). CoRR, (2022)

FCM: Forgetful Causal Masking Makes Causal Language Models Better Zero-Shot Learners. CoRR, (2022)

Character-Aware Models Improve Visual Text Rendering. ACL (1), pages 16270-16297. Association for Computational Linguistics, (2023)

UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining. ICLR, OpenReview.net, (2023)