Author of the publication

DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters.

KDD, pages 3505-3506. ACM, (2020)


Other publications of authors with the same name

Scalable and Efficient MoE Training for Multitask Multilingual Models. CoRR, (2021)

Fast LSTM Inference by Dynamic Decomposition on Cloud Systems. ICDM, pages 748-757. IEEE, (2019)

ZeRO-Offload: Democratizing Billion-Scale Model Training. USENIX Annual Technical Conference, pages 551-564. USENIX Association, (2021)

SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement. NeurIPS, pages 20531-20544. (2021)

Optimizing the Four-Index Integral Transform Using Data Movement Lower Bounds Analysis. PPoPP, pages 327-340. ACM, (2017)

DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models. CoRR, (2023)

DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. CoRR, (2023)

DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. ICML, volume 162 of Proceedings of Machine Learning Research, pages 18332-18346. PMLR, (2022)

A Communication-Optimal Framework for Contracting Distributed Tensors. SC, pages 375-386. IEEE Computer Society, (2014)

International Conference on Computational Science, ICCS 2012. ICCS, volume 9 of Procedia Computer Science, pages 412-421. Elsevier, (2012)