
FAT: Frequency-Aware Transformation for Bridging Full-Precision and Low-Precision Deep Representations.

, , , , , and . IEEE Trans. Neural Networks Learn. Syst., 35 (2): 2640-2654 (February 2024)


Other publications by persons with the same name

Dynamic and Static Context-Aware LSTM for Multi-agent Motion Prediction., , , and . ECCV (21), volume 12366 of Lecture Notes in Computer Science, pp. 547-563. Springer, (2020)

Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies., , , , , , , and . CoRR, (2024)

D2O: Dynamic Discriminative Operations for Efficient Generative Inference of Large Language Models., , , , , , , , , and . CoRR, (2024)

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference., , , , , , , , , and . CoRR, (2023)

LiteGT: Efficient and Lightweight Graph Transformers., , and . CIKM, pp. 161-170. ACM, (2021)

BATMANN: A Binarized-All-Through Memory-Augmented Neural Network for Efficient In-Memory Computing., , , , , , , and . ASICON, pp. 1-4. IEEE, (2021)

FAT: Frequency-Aware Transformation for Bridging Full-Precision and Low-Precision Deep Representations., , , , , and . IEEE Trans. Neural Networks Learn. Syst., 35 (2): 2640-2654 (February 2024)

Structured Pruning for Efficient Generative Pre-trained Language Models., , , , , , , and . ACL (Findings), pp. 10880-10895. Association for Computational Linguistics, (2023)

UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers., , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pp. 31292-31311. PMLR, (2023)

FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation., , , , , and . CoRR, (2021)