
Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model.

EMNLP, pages 15038-15061. Association for Computational Linguistics, 2023.

Other publications of authors with the same name

8-bit Optimizers via Block-wise Quantization. ICLR, OpenReview.net, 2022.
8-Bit Approximations for Parallelism in Deep Learning. ICLR (Poster), 2016.
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR, 2022.
Training Transformers Together. NeurIPS (Competition and Demos), volume 176 of Proceedings of Machine Learning Research, pages 335-342. PMLR, 2021.
Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model. EMNLP, pages 15038-15061. Association for Computational Linguistics, 2023.
Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR, 2022.
Distributed Inference and Fine-tuning of Large Language Models Over The Internet. CoRR, 2023.
8-bit Optimizers via Block-wise Quantization. CoRR, 2021.
SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression. CoRR, 2023.
Training Transformers Together. CoRR, 2022.