Author of the publication

MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization.

, , , , , , , and . HPCA, page 124-138. IEEE, (2024)


Other publications of authors with the same name

Grasp State Assessment of Deformable Objects Using Visual-Tactile Fusion Perception., , , , and . ICRA, page 538-544. IEEE, (2020)

Block convolution: Towards memory-efficient inference of large-scale CNNs on FPGA., , , and . DATE, page 1163-1166. IEEE, (2018)

A System-Level Solution for Low-Power Object Detection., , , , , , , , , and 1 other author(s). ICCV Workshops, page 2461-2468. IEEE, (2019)

LW-DETR: A Transformer Replacement to YOLO for Real-Time Detection., , , , , , , , , and 5 other author(s). CoRR, (2024)

$A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks., , , , , , , and . ICLR, OpenReview.net, (2023)

EBERT: Efficient BERT Inference with Dynamic Structured Pruning., , , and . ACL/IJCNLP (Findings), volume ACL/IJCNLP 2021 of Findings of ACL, page 4814-4823. Association for Computational Linguistics, (2021)

Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA., , , and . CoRR, (2021)

GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks., , , and . NeurIPS, (2022)

A2Q: Aggregation-Aware Quantization for Graph Neural Networks., , , , , , , and . CoRR, (2023)

MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization., , , , , , , and . CoRR, (2023)