We show that, during inference with Convolutional Neural Networks (CNNs),
more than 2x to 8x ineffectual work can be exposed if instead of targeting
those weights and activations that are zero, we target different combinations
of value stream properties. We demonstrate a practical application with
Bit-Tactical (TCL), a hardware accelerator which exploits weight sparsity,
per-layer precision variability, dynamic fine-grain precision reduction for
activations, and optionally the naturally occurring sparse effectual bit
content of activations to improve performance and energy efficiency. TCL
benefits both sparse and dense CNNs, natively supports both convolutional and
fully-connected layers, and exploits properties of all activations to reduce
storage, communication, and computation demands. While TCL does not require
changes to the CNN to deliver benefits, it does reward any technique that would
amplify any of the aforementioned weight and activation value properties.
Compared to an equivalent data-parallel accelerator for dense CNNs, TCLp, a
variant of TCL, improves performance by 5.05x and is 2.98x more energy efficient
while requiring 22% more area.
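
The gap between value-level and bit-level ineffectual work that motivates this approach can be made concrete with a few lines of code. The sketch below is a minimal illustration, not the paper's method: the function name, the uniform fixed-point quantization, and the 16-bit width are all assumptions. It counts zero values versus zero bits in a ReLU-style activation stream; the zero-bit fraction is typically far higher than the zero-value fraction, which is the extra headroom exposed by targeting effectual bit content rather than just zero values.

```python
import numpy as np

def value_stream_stats(t: np.ndarray, bits: int = 16):
    """Fraction of zero values and of zero (ineffectual) bits in a tensor,
    after quantizing magnitudes to a `bits`-bit fixed-point representation.

    Illustrative uniform quantization only; per-layer precision assignment
    and dynamic precision reduction as in TCL are more involved."""
    scale = (1 << bits) - 1
    m = np.abs(t)
    q = np.round(m / max(m.max(), 1e-12) * scale).astype(np.uint16)
    zero_value_frac = float(np.mean(q == 0))
    # Effectual bits are the set bits; all remaining bit positions
    # contribute nothing to a multiply-accumulate.
    set_bits = int(np.unpackbits(q.view(np.uint8)).sum())
    zero_bit_frac = 1.0 - set_bits / (q.size * bits)
    return zero_value_frac, zero_bit_frac

# ReLU-like activations: roughly half the values are zero,
# but a much larger fraction of the bits are zero.
acts = np.maximum(np.random.randn(1 << 16).astype(np.float32), 0.0)
zv, zb = value_stream_stats(acts)
print(f"zero values: {zv:.1%}  ineffectual (zero) bits: {zb:.1%}")
```

Under these assumptions, the printed zero-bit fraction comfortably exceeds the zero-value fraction, consistent with the abstract's claim that targeting value stream properties beyond zero weights and activations exposes substantially more ineffectual work.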