
Compute-Optimal Quantization-Aware Training
Apple Machine Learning Research

Quantization-aware training (QAT) is a leading technique for improving the accuracy of quantized neural networks. Previous work has shown that decomposing training into a full-precision (FP) phase followed by a QAT phase yields superior accuracy compared to QAT alone. However, the optimal allocation of compute between the FP and QAT phases remains unclear. We conduct extensive experiments with various compute budgets, QAT bit widths, and model sizes from 86.0M to 2.2B to investigate how different QAT durations impact final performance. We demonstrate that, contrary to previous findings, the…
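
The two-phase recipe the abstract describes can be pictured as a single training run whose step budget is split between a full-precision phase and a fake-quantized phase. Below is a minimal sketch under assumed details: PyTorch-style symmetric per-tensor fake quantization with a straight-through estimator, and a `qat_fraction` hyperparameter for the split. The `QATLinear` layer, the toy model, and the data are illustrative placeholders, not the paper's actual setup or results.

```python
# Minimal two-phase FP -> QAT training sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn


def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric per-tensor fake quantization with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Forward pass sees the quantized weights; backward pass sees the identity.
    return w + (w_q - w).detach()


class QATLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized once the QAT phase starts."""

    def __init__(self, in_features: int, out_features: int, bits: int = 4):
        super().__init__(in_features, out_features)
        self.bits = bits
        self.quantize = False  # toggled on when the QAT phase begins

    def forward(self, x):
        w = fake_quantize(self.weight, self.bits) if self.quantize else self.weight
        return nn.functional.linear(x, w, self.bias)


def train(model, batches, total_steps: int, qat_fraction: float = 0.1, lr: float = 1e-3):
    """Spend (1 - qat_fraction) of the step budget in full precision, then switch to QAT."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    fp_steps = int(total_steps * (1.0 - qat_fraction))
    for step in range(total_steps):
        if step == fp_steps:
            # Transition from the FP phase to the QAT phase.
            for m in model.modules():
                if isinstance(m, QATLinear):
                    m.quantize = True
        x, y = next(batches)
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(QATLinear(16, 32, bits=4), nn.ReLU(), QATLinear(32, 1, bits=4))

    def toy_batches():
        while True:
            x = torch.randn(64, 16)
            yield x, x.sum(dim=1, keepdim=True)

    train(model, toy_batches(), total_steps=200, qat_fraction=0.1)
```

The quantity the paper studies is, in this picture, how `qat_fraction` should be chosen for a given total compute budget, bit width, and model size; the sketch simply makes that single knob explicit.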