
DistiLLM: Towards Streamlined Distillation for Large Language Models. ICML, OpenReview.net, (2024)


Other publications of authors with the same name

DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models. CoRR, (2023)

Real-time and Explainable Detection of Epidemics with Global News Data. Healthcare AI and COVID-19 Workshop, volume 184 of Proceedings of Machine Learning Research, pages 73-90. PMLR, (2022)

Self-Contrastive Learning. CoRR, (2021)

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning. CIKM, pages 4359-4363. ACM, (2022)

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning. CVPR, pages 7537-7547. IEEE, (2023)

Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation. INTERSPEECH, pages 316-320. ISCA, (2023)

Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition. CoRR, (2024)

Calibration of Few-Shot Classification Tasks: Mitigating Misconfidence From Distribution Mismatch. IEEE Access, (2022)

Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network. AAAI, pages 197-205. AAAI Press, (2023)