Author of the publication

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications., , , and . FPL, pages 291-297. IEEE, (2020)


Other publications of authors with the same name

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications., , , and . FPL, pages 291-297. IEEE, (2020)

BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing., , and . FPL, pages 307-314. IEEE Computer Society, (2018)

Towards efficient quantized neural network inference on mobile devices: work-in-progress., and . CASES, pages 18:1-18:2. ACM, (2017)

Hybrid breadth-first search on a single-chip FPGA-CPU heterogeneous platform., , and . FPL, pages 1-8. IEEE, (2015)

An energy efficient column-major backend for FPGA SpMV accelerators., and . ICCD, pages 432-439. IEEE Computer Society, (2014)

EcoFlow: Efficient Convolutional Dataflows on Low-Power Neural Network Accelerators., , , , , , , and . IEEE Trans. Computers, 73 (9): 2275-2289 (September 2024)

Scaling Binarized Neural Networks on Reconfigurable Logic., , , , , , and . PARMA-DITAM@HiPEAC, pages 25-30. ACM, (2017)

A Vector Caching Scheme for Streaming FPGA SpMV Accelerators., and . ARC, volume 9040 of Lecture Notes in Computer Science, pages 15-26. Springer, (2015)

High-Throughput DNN Inference with LogicNets., , , and . FCCM, page 238. IEEE, (2020)

A2Q+: Improving Accumulator-Aware Weight Quantization., , , and . ICML, OpenReview.net, (2024)