Author of the publication

Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability.

, , , and . CoRR, (2017)

Other publications of authors with the same name

Bit-pragmatic Deep Neural Network Computing., , , , and . CoRR, (2016)

Bit-Pragmatic Deep Neural Network Computing., , , , and . ICLR (Workshop), OpenReview.net, (2017)

Dynamic Stripes: Exploiting the Dynamic Precision Requirements of Activation Values in Neural Networks., , , and . CoRR, (2017)

Bit-pragmatic deep neural network computing., , , , , , and . MICRO, page 382-394. ACM, (2017)

Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How., , , , , , , and . CoRR, (2018)

Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability., , , and . CoRR, (2017)

An improved FPGA-based specific processor for Blokus Duo., , and . FPT, page 366-369. IEEE, (2014)

DPRed: Making Typical Activation Values Matter In Deep Learning Computing., , , , and . CoRR, (2018)

Identifying and Exploiting Ineffectual Computations to Enable Hardware Acceleration of Deep Learning., , , , , , , , , and 3 other author(s). NEWCAS, page 356-360. IEEE, (2018)

Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks., , , and . CoRR, (2017)