Other publications of authors with the same name

Prague: High-Performance Heterogeneity-Aware Asynchronous Decentralized Training., , , and . ASPLOS, page 401-416. ACM, (2020). (Note: ASPLOS 2020 was canceled because of COVID-19.)

Critique of "Planetary Normal Mode Computation: Parallel Algorithms, Performance, and Reproducibility" by SCC Team From Tsinghua University., , , , , , , and . IEEE Trans. Parallel Distributed Syst., 32 (11): 2631-2634 (2021)

FastDecode: High-Throughput GPU-Efficient LLM Serving using Heterogeneous Pipelines., and . CoRR, (2024)

Student Cluster Competition 2018, Team Tsinghua University: Reproducing performance of multi-physics simulations of the Tsunamigenic 2004 Sumatra megathrust earthquake on the Intel Skylake Architecture., , , , , , , , and . Parallel Comput., (2019)

FastMoE: A Fast Mixture-of-Expert Training System., , , , , and . CoRR, (2021)

Efficiently emulating high-bitwidth computation with low-bitwidth hardware., , , , , , , and . ICS, page 5:1-5:12. ACM, (2022)

FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models., , , , , , and . PPoPP, page 120-134. ACM, (2022)

Heterogeneity-Aware Asynchronous Decentralized Training., , , and . CoRR, (2019)

BaGuaLu: targeting brain scale pretrained models with over 37 million cores., , , , , , , , , and 15 other author(s). PPoPP, page 192-204. ACM, (2022)

SmartMoE: Efficiently Training Sparsely-Activated Models through Combining Offline and Online Parallelization., , , , , and . USENIX Annual Technical Conference, page 961-975. USENIX Association, (2023)