Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers.

ACL (1), pages 12933-12950. Association for Computational Linguistics, 2023.
