1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed.

ICML, volume 139 of Proceedings of Machine Learning Research, pages 10118-10129. PMLR, (2021)


Other publications of authors with the same name

Exploiting Hardware Multicast and GPUDirect RDMA for Efficient Broadcast. IEEE Trans. Parallel Distributed Syst., 30 (3): 575-588 (2019)

Optimized Broadcast for Deep Learning Workloads on Dense-GPU InfiniBand Clusters: MPI or NCCL? CoRR, (2017)

A Novel Tensor-Expert Hybrid Parallelism Approach to Scale Mixture-of-Experts Training. CoRR, (2023)

1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed. HiPC, pages 272-281. IEEE, (2022)

OC-DNN: Exploiting Advanced Unified Memory Capabilities in CUDA 9 and Volta GPUs for Out-of-Core DNN Training. HiPC, pages 143-152. IEEE, (2018)

Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning. ICPP, pages 161-170. IEEE Computer Society, (2017)

Intercloud message exchange middleware. ICUIMC, pages 79:1-79:7. ACM, (2012)

An In-depth Performance Characterization of CPU- and GPU-based DNN Training on Modern Architectures. MLHPC@SC, pages 8:1-8:8. ACM, (2017)

DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. CoRR, (2023)

Communication Profiling and Characterization of Deep-Learning Workloads on Clusters With High-Performance Interconnects. IEEE Micro, 40 (1): 35-43 (2020)