Key Points

- This research paper explores quantizing the weight matrices of large language models (LLMs) to an extremely low bit-width (1-bit) in order to reduce storage and computational overheads.

- The OneBit method is introduced: a 1-bit quantization-aware training (QAT) framework that combines a novel 1-bit parameter representation with an effective parameter initialization method based on matrix decomposition.

- Experimental results indicate that OneBit retains at least 83% of the non-quantized model's performance, with a robust training process, when using only 1-bit weight matrices.

- The computational intensity and memory requirements of LLMs, especially transformer-based ones, present significant challenges, motivating methods that reduce these overheads while preserving most of the original model's capabilities.

- Existing quantization methods suffer severe performance degradation at extremely low bit-widths, which has motivated the exploration of QAT to overcome this limitation.

- A novel Linear layer and Sign-Value-Independent Decomposition (SVID) are proposed for weight matrices so that LLMs can be represented using approximately 1-bit values, addressing the challenges of 1-bit weight quantization and providing an effective starting point for further training (see the decomposition sketch after this list).

- Quantization-aware knowledge distillation is employed to transfer knowledge from the original model to the quantized one, using cross-entropy-based and mean-squared-error-based guidance for the student model (see the loss sketch after this list).

- Extensive experiments demonstrate that OneBit achieves a good trade-off between space occupancy and model performance, outperforming representative strong baselines on different models.

- The study affirms the practical usability of OneBit in reducing the memory footprint of LLMs and emphasizes its potential for efficient deployment on resource-limited devices.
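
To make the 1-bit parameter representation concrete, here is a minimal sketch of a sign-value-independent decomposition in PyTorch. It follows the paper's high-level description (a ±1 sign matrix plus a rank-1 approximation of the absolute values), but the helper name `svid_decompose` and the use of a plain SVD for the rank-1 factor are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def svid_decompose(W: torch.Tensor):
    """Sign-Value-Independent Decomposition (illustrative sketch).

    Splits W into a +/-1 sign matrix and two value vectors (a, b)
    such that W is approximately sign(W) * outer(a, b).
    """
    # +/-1 sign matrix; map zeros to +1 so every entry is one bit.
    w_sign = torch.where(W >= 0, torch.ones_like(W), -torch.ones_like(W))

    # Rank-1 approximation of the absolute values: |W| ~= a b^T.
    # The paper discusses decomposition choices; a plain SVD is one option.
    U, S, Vh = torch.linalg.svd(torch.abs(W), full_matrices=False)
    a = U[:, 0] * S[0].sqrt()   # value vector along the output dimension
    b = Vh[0, :] * S[0].sqrt()  # value vector along the input dimension
    return w_sign, a, b

# Usage: reconstruct an approximation and check the relative error.
W = torch.randn(8, 16)
w_sign, a, b = svid_decompose(W)
W_hat = w_sign * torch.outer(a, b)
print(torch.norm(W - W_hat) / torch.norm(W))
```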
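The distillation objective mentioned above can be sketched similarly. The sketch assumes the common formulation of cross-entropy between the teacher's and student's output distributions plus a mean-squared-error term on hidden states; the function name `distillation_loss`, the exact layers matched, and the weighting coefficient `alpha` are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden, alpha=1.0):
    """Quantization-aware KD loss (illustrative sketch).

    Combines cross-entropy guidance on the output distribution with
    MSE guidance on intermediate hidden states; alpha is an assumed
    weighting hyperparameter.
    """
    # Cross-entropy between the teacher's soft targets and the
    # student's predicted distribution, averaged over all tokens.
    ce = -(F.softmax(teacher_logits, dim=-1)
           * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()

    # MSE between corresponding student/teacher hidden states.
    mse = sum(F.mse_loss(s, t) for s, t in zip(student_hidden, teacher_hidden))

    return ce + alpha * mse

# Usage with dummy shapes: (batch, seq, vocab) logits and one
# (batch, seq, dim) hidden state per matched layer.
s_logits, t_logits = torch.randn(2, 4, 100), torch.randn(2, 4, 100)
s_hid = [torch.randn(2, 4, 32) for _ in range(3)]
t_hid = [torch.randn(2, 4, 32) for _ in range(3)]
print(distillation_loss(s_logits, t_logits, s_hid, t_hid))
```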

Summary

This research paper introduces OneBit, a 1-bit quantization-aware training (QAT) framework that boldly quantizes the weight matrices of large language models (LLMs) to 1-bit. The framework pairs a novel 1-bit parameter representation method with an effective parameter initialization method based on matrix decomposition.

Experimental results demonstrate that OneBit retains at least 83% of the non-quantized model's performance, with a robust training process, when using only 1-bit weight matrices. By addressing the computational and memory overheads of deploying LLMs, the framework makes them suitable for mid-to-high-end GPUs and mobile devices.

The paper contextualizes the research by highlighting the computational intensity and memory requirements of transformer-based LLMs, as well as the challenges traditional quantization methods face when the bit-width is extremely reduced. It discusses the limitations of existing post-training quantization (PTQ) and quantization-aware training (QAT) methods when quantizing models to 1-bit, emphasizing the drastic precision loss in weight matrices at such low bit-widths.

The novelty of the OneBit framework lies in decomposing each weight matrix of an LLM into one sign matrix and two value vectors, representing the model with approximately 1-bit values. The proposed Linear layer and Sign-Value-Independent Decomposition (SVID) for weight matrices, together with quantization-aware knowledge distillation, contribute to the improved performance and convergence speed of the 1-bit model (a sketch of the corresponding forward pass appears below).
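
As a rough illustration of how such a Linear layer could work, the sketch below scales the input by one value vector, multiplies by the ±1 sign matrix, and rescales the output by the other value vector, so the result approximates the original full-precision product. The class name `OneBitLinear` and the initialization from SVID factors are assumptions based on the paper's description, not its reference implementation.

```python
import torch
import torch.nn as nn

class OneBitLinear(nn.Module):
    """1-bit Linear layer (illustrative sketch).

    Stores a +/-1 sign matrix plus two full-precision value vectors:
    g scales the input features, h scales the output features.
    forward(x) = ((x * g) @ sign^T) * h, which approximates x @ W^T
    when sign, h, g come from the SVID factors of W.
    """
    def __init__(self, w_sign, a, b):
        super().__init__()
        self.register_buffer("w_sign", w_sign)  # (out, in), entries +/-1
        self.g = nn.Parameter(b.clone())        # per-input-feature scale
        self.h = nn.Parameter(a.clone())        # per-output-feature scale

    def forward(self, x):
        # The elementwise scales restore the magnitudes that the
        # 1-bit sign matrix cannot carry on its own.
        return ((x * self.g) @ self.w_sign.T) * self.h

# Usage: initialize from the SVID factors of a full-precision weight
# and compare against the original matmul.
W = torch.randn(8, 16)
w_sign = torch.where(W >= 0, torch.ones_like(W), -torch.ones_like(W))
U, S, Vh = torch.linalg.svd(torch.abs(W), full_matrices=False)
a, b = U[:, 0] * S[0].sqrt(), Vh[0, :] * S[0].sqrt()
layer = OneBitLinear(w_sign, a, b)
x = torch.randn(4, 16)
print(torch.norm(layer(x) - x @ W.T) / torch.norm(x @ W.T))
```

The sign matrix is kept in floating point here for clarity; an actual deployment would pack it into true 1-bit storage, with the value vectors and sign matrix refined further during quantization-aware training.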

The paper outlines the experimental setup, training details, and model evaluations, demonstrating the efficiency of the OneBit framework in achieving a good trade-off between model size and performance. Additionally, it discusses the practical usability of OneBit in reducing the memory footprint of LLMs and enabling efficient deployment on various devices.

The study concludes by acknowledging the limitations of the proposed method and its potential for future research, while emphasizing the ethical use of publicly available and open-source models for academic and research-based purposes.

Reference: https://arxiv.org/abs/2402.112...