Abstract
Currently, neural network architecture design is mostly guided by the
indirect metric of computational complexity, i.e., FLOPs. However, the
direct metric, e.g., speed, also depends on other factors such as
memory access cost and platform characteristics. This work therefore
proposes evaluating the direct metric on the target platform, rather than
only considering FLOPs. Based on a series of controlled experiments, this
work derives several practical guidelines for efficient network design.
Accordingly, a new architecture is presented, called ShuffleNet V2.
Comprehensive ablation experiments verify that our model is state-of-the-art
in terms of the speed-accuracy tradeoff.