Abstract
Quantization methods reduce the number of bits required to represent each
parameter in a model, trading accuracy for smaller memory footprints and
lower inference latency. However, the final model size depends on both the number
of parameters of the original model and the rate of compression. For example, a
30B 8-bit model and a 60B 4-bit model have the same number of bits but may have
very different zero-shot accuracies. In this work, we study this trade-off by
developing inference scaling laws of zero-shot performance in Large Language
Models (LLMs) to determine the bit-precision and model size that maximizes
zero-shot performance. We run more than 35,000 experiments with 16-bit inputs
and k-bit parameters to examine which zero-shot quantization methods improve
scaling for 3 to 8-bit precision at scales of 19M to 176B parameters across the
LLM families BLOOM, OPT, NeoX/Pythia, and GPT-2. We find that it is challenging
to improve the bit-level scaling trade-off, with the only reliable improvements
coming from the use of a small block size (splitting the parameters into small,
independently quantized blocks) and from the choice of quantization data type
(e.g., Int vs Float). Overall, our findings show that 4-bit precision is
almost universally optimal for total model bits and zero-shot accuracy.
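To illustrate the block-size idea mentioned above, the following is a minimal NumPy sketch of block-wise quantization using a symmetric absmax scheme with one scale per block. The function names, the absmax scheme, and all parameter values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def blockwise_quantize(w, bits=4, block_size=64):
    """Sketch of block-wise absmax quantization (illustrative, not the
    paper's exact method): each block gets its own scale."""
    qmax = 2 ** (bits - 1) - 1                      # e.g., 7 for signed 4-bit
    pad = (-w.size) % block_size                    # pad to whole blocks
    blocks = np.pad(w, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True)  # one absmax per block
    scales[scales == 0] = 1.0                       # guard against all-zero blocks
    q = np.round(blocks / scales * qmax).astype(np.int8)
    return q, scales

def blockwise_dequantize(q, scales, bits=4, n=None):
    qmax = 2 ** (bits - 1) - 1
    w = q.astype(np.float32) / qmax * scales        # broadcast per-block scales
    return w.reshape(-1)[:n]                        # drop the padding

w = np.random.randn(10_000).astype(np.float32)
q, s = blockwise_quantize(w, bits=4, block_size=64)
w_hat = blockwise_dequantize(q, s, bits=4, n=w.size)
print(f"mean abs quantization error: {np.abs(w - w_hat).mean():.4f}")
```

A smaller block size adds overhead (more scales to store) but narrows the range each scale must cover, so outlier weights corrupt fewer neighbors and per-parameter quantization error drops.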