Author of the publication

SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity.

, , , and . CoRR, (2017)


Other publications of authors with the same name

Tight Compression: Compressing CNN Model Tightly Through Unstructured Pruning and Simulated Annealing Based Permutation., , , and . DAC, page 1-6. IEEE, (2020)

TAC-RAM: A 65nm 4Kb SRAM Computing-in-Memory Design with 57.55 TOPS/W supporting Multibit Matrix-Vector Multiplication for Binarized Neural Network., , , , , , , , and . AICAS, page 66-69. IEEE, (2022)

SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity., , , and . CoRR, (2017)

Accelerating Large Kernel Convolutions with Nested Winograd Transformation., , and . VLSI-SoC, page 1-6. IEEE, (2023)

Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation., , , and . IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 42 (2): 644-657 (February 2023)

Model Predictive Control for Stand-alone Half-bridge Inverter., , and . GCCE, page 1183-1187. IEEE, (2023)

A high-throughput and energy-efficient RRAM-based convolutional neural network using data encoding and dynamic quantization., , , and . ASP-DAC, page 123-128. IEEE, (2018)

CompRRAE: RRAM-based convolutional neural network accelerator with reduced computations through a runtime activation estimation., , , and . ASP-DAC, page 133-139. ACM, (2019)

SparseNN: An energy-efficient neural network accelerator exploiting input and output sparsity., , , and . DATE, page 241-244. IEEE, (2018)

Late Breaking Results: Weight Decay is ALL You Need for Neural Network Sparsification., , , , and . DAC, page 1-2. IEEE, (2023)