Industry Expert Blogs
Neural Network Model quantization on mobile (Arm Blogs) - Roberto Lopez Mendez, Arm, Oct. 20, 2023
Quantization, in its general definition, is the process of mapping continuous infinite values to a smaller set of discrete finite values. In this blog, we discuss quantization in the context of Neural Network (NN) models: the process of reducing the precision of the weights, biases, and activations. Moving from floating-point representations to low-precision fixed-point integer values can substantially reduce the memory footprint and latency, which is crucial for deploying models on mobile devices and edge platforms, where runtime computational resources are restricted. Quantization has also gained importance due to the latest developments in generative and Large Language Models (LLMs) and the need to bring them to the mobile space.
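To make the idea concrete, the following is a minimal sketch (not part of the original blog) of affine int8 quantization of a float tensor, using a scale and zero point derived from the observed value range; production frameworks such as TFLite compute these parameters per tensor or per channel during calibration or training.

```python
import numpy as np

def quantize_int8(x):
    """Affine (asymmetric) quantization of a float array to int8.

    Illustrative only: the scale and zero point are derived from the
    tensor's min/max, mapping [x_min, x_max] onto [-128, 127].
    """
    x_min, x_max = float(x.min()), float(x.max())
    qmin, qmax = -128, 127
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(weights)
print("max abs error:", np.abs(dequantize(q, scale, zp) - weights).max())
```

The error printed at the end is the quantization error: the precision lost by representing each float with one of 256 integer levels.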
This blog intends to provide a picture of the current state of quantization on mobile (Android) and the opportunities it opens for bringing inference of complex NN models to the edge. The first section provides an overview of existing quantization methods and classifications. The second section discusses and compares the two main quantization approaches in TensorFlow Lite (TFLite): Post-Training Quantization (PTQ) and Quantization Aware Training (QAT). Given the increasing importance of LLMs and generative models, the last section is devoted to some of the challenges of Transformer models, where mixed-precision quantization is the preferred approach.
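As a rough illustration of the PTQ workflow the blog compares against QAT, here is a hedged sketch using the standard TFLite converter API; the paths and the calibration iterable are placeholders, not taken from the original post.

```python
import tensorflow as tf

# Post-Training Quantization: quantize an already-trained model at
# conversion time. "saved_model_dir" and "calibration_samples" are
# placeholders for your own model and data.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# For full-integer quantization, a representative dataset lets the
# converter calibrate activation ranges.
def representative_data():
    for sample in calibration_samples:  # placeholder iterable of input arrays
        yield [sample]

converter.representative_dataset = representative_data
tflite_int8_model = converter.convert()

# Quantization Aware Training (QAT), by contrast, inserts fake-quantization
# ops during training, e.g. via the TensorFlow Model Optimization toolkit:
#   import tensorflow_model_optimization as tfmot
#   qat_model = tfmot.quantization.keras.quantize_model(float_model)
# The QAT model is then fine-tuned before being converted as above.
```

PTQ needs no retraining and only a small calibration set, while QAT simulates quantization during training and typically recovers more accuracy at the cost of a training pass.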