
Other publications by persons with the same name

Joint Task Offloading, CNN Layer Scheduling and Resource Allocation in Cooperative Computing System., , and . ChinaCom (1), vol. 312 of Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pp. 129-142. Springer, (2019)

XLM-E: Cross-lingual Language Model Pre-training via ELECTRA., , , , , , , and . CoRR, (2021)

Pushing the Limits of Narrow Precision Inferencing at Cloud Scale with Microsoft Floating Point., , , , , , , , , and 14 other authors. NeurIPS, (2020)

COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining., , , , , , and . NeurIPS, pp. 23102-23114. (2021)

Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention., , , , , and . ICLR, OpenReview.net, (2020)

MS MARCO: A Human Generated MAchine Reading COmprehension Dataset., , , , , , and . CoCo@NIPS, vol. 1773 of CEUR Workshop Proceedings, CEUR-WS.org, (2016)

Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers., , , , , , , and . ACL (1), pp. 12933-12950. Association for Computational Linguistics, (2023)

Language Is Not All You Need: Aligning Perception with Language Models., , , , , , , , , and 8 other authors. NeurIPS, (2023)

MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection., , , , , and . CoRR, (2024)

MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, , , , , , , , , and 5 other authors. (2016) cite arxiv:1611.09268