
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding.

EMNLP (1), page 6787-6800. Association for Computational Linguistics, (2021)


Other publications of authors with the same name

CiT: Curation in Training for Effective Vision-Language Data. ICCV, page 15134-15143. IEEE, (2023)

Pre-training via Paraphrasing. NeurIPS, (2020)

Multi-Task Retrieval for Knowledge-Intensive Tasks. ACL/IJCNLP (1), page 1098-1111. Association for Computational Linguistics, (2021)

MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts. CoRR, (2024)

Flap: Fast Language-Audio Pre-Training. ASRU, page 1-8. IEEE, (2023)

ALERT: Adapt Language Models to Reasoning Tasks. ACL (1), page 1055-1081. Association for Computational Linguistics, (2023)

HTLM: Hyper-Text Pre-Training and Prompting of Language Models. ICLR, OpenReview.net, (2022)

VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. ACL/IJCNLP (Findings), volume ACL/IJCNLP 2021 of Findings of ACL, page 4227-4239. Association for Computational Linguistics, (2021)

Demystifying CLIP Data. ICLR, OpenReview.net, (2024)