Key Points

1. Introduction: The paper introduces a new era of 1-bit Large Language Models (LLMs) with BitNet b1.58, in which every parameter (weight) of the LLM is ternary, taking a value in {-1, 0, 1}. The model matches full-precision baselines in performance while delivering significant gains in latency, memory usage, throughput, and energy consumption.

2. Challenges and Solutions: The growing size of LLMs has raised concerns about deployment cost and energy consumption. Post-training quantization is widely used to produce low-bit models for inference, but it is sub-optimal compared with training low-bit models from the start. BitNet b1.58 offers a cost-effective alternative, substantially reducing energy consumption, memory footprint, and latency relative to full-precision LLMs.

3. Features of BitNet b1.58: BitNet b1.58 retains the benefits of the original 1-bit BitNet while adding a third weight value, 0, giving log2(3) ≈ 1.58 bits of information per weight. The zero value enables explicit feature filtering, and the model matches full-precision LLMs in perplexity and end-task performance, and can outperform FP16 LLMs of similar size.

4. Architecture and Training: BitNet b1.58 builds on the BitNet architecture, using an absmean quantization function to constrain weights to {-1, 0, 1} (a sketch of this quantization follows this list). It adopts LLaMA-like components such as RMSNorm, SwiGLU, and rotary embeddings, making it easy to integrate with popular open-source software. Performance was evaluated across a range of language tasks and datasets.

5. Performance Comparison: Starting from the 3B size, BitNet b1.58 matches FP16 LLaMA baselines in perplexity and end-task performance while using substantially less memory and running with lower latency, constituting a Pareto improvement over existing models.

6. Scalability: BitNet b1.58 implies a new scaling law: a larger 1.58-bit model can be cheaper in latency, memory usage, and energy consumption than a smaller full-precision one; for example, the paper reports that a 13B BitNet b1.58 is more efficient on these metrics than a 3B FP16 LLM.

7. Training with 2T Tokens: Training BitNet b1.58 on 2T tokens and comparing it against StableLM-3B trained on the same number of tokens showed stronger performance across a range of end tasks, highlighting the strong generalization capability of 1.58-bit LLMs.

8. Mixture-of-Experts (MoE): BitNet b1.58 mitigates key deployment challenges of MoE models: its reduced memory footprint cuts the number of devices needed to host the experts and lowers the overhead of transferring activations between chips, making deployment more efficient.

9. Implications: BitNet b1.58 is a strong fit for edge and mobile devices, where its lower memory and energy requirements improve performance and enable applications that were previously impractical. The paper also calls for new hardware designed specifically for 1-bit LLMs, given the new computation paradigm introduced by BitNet.
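
The absmean weight quantization mentioned in point 4 can be sketched in a few lines of PyTorch: scale the weight matrix by its mean absolute value, then round-and-clip every entry to {-1, 0, 1}. This is an illustrative reading of the scheme described in the paper; the function name and the epsilon constant are my own choices, not taken from the paper's code.

```python
import torch

def absmean_ternary_quant(w: torch.Tensor, eps: float = 1e-5):
    """Sketch of absmean weight quantization: scale the weight matrix by
    its mean absolute value, then round each entry to the nearest value
    in {-1, 0, +1}. The scale is returned for later rescaling."""
    gamma = w.abs().mean()                            # absmean scaling factor
    w_ternary = (w / (gamma + eps)).round().clamp(-1, 1)
    return w_ternary, gamma

# Minimal usage with random weights
w = torch.randn(4, 4)
w_q, scale = absmean_ternary_quant(w)
print(w_q)       # every entry is -1.0, 0.0 or 1.0
print(scale)     # w_q * scale gives a coarse ternary approximation of w
```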

Summary

The research paper introduces a new 1-bit Large Language Model (LLM) variant called BitNet b1.58, which utilizes ternary {-1, 0, 1} parameters. The paper demonstrates that BitNet b1.58 matches the performance of full-precision Transformer LLMs while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. It also defines a new scaling law and recipe for training next-generation LLMs and enables a new computation paradigm.
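
The "new computation paradigm" follows from the fact that a matrix multiplication against ternary weights never needs a floating-point multiply: each output is a running sum of activations, added for +1 weights, subtracted for -1 weights, and skipped for 0. The explicit loop below is only a conceptual illustration of that idea (a real kernel would use packed weights and vectorized integer adds), not the paper's implementation.

```python
import torch

def ternary_matvec(w_ternary: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Conceptual matrix-vector product with weights in {-1, 0, 1}:
    only additions and subtractions, no multiplications."""
    out = []
    for row in w_ternary.tolist():           # one output element per weight row
        acc = 0.0
        for w_ij, x_j in zip(row, x.tolist()):
            if w_ij == 1:
                acc += x_j                    # +1 weight: add the activation
            elif w_ij == -1:
                acc -= x_j                    # -1 weight: subtract it
            # 0 weight: skip entirely (feature filtering)
        out.append(acc)
    return torch.tensor(out)

w = torch.tensor([[1., 0., -1.],
                  [0., 1., 1.]])
x = torch.tensor([0.5, -2.0, 3.0])
print(ternary_matvec(w, x))   # tensor([-2.5000,  1.0000])
print(w @ x)                  # same result from an ordinary matmul
```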

The paper modifies the original BitNet architecture, chiefly by adopting an absmean quantization function that constrains weights to {-1, 0, 1} (1.58 bits per weight) while keeping 8-bit activations. At matched model sizes and training-token budgets, BitNet b1.58 compares favorably with full-precision LLMs in perplexity and end-task performance, and it is markedly cheaper in memory, latency, throughput, and energy consumption, showcasing its cost-effectiveness and potential for hardware optimization.
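
For the 8-bit activation side, a plausible reading of absmax quantization (scale each token so its largest magnitude maps to 127, round to int8, and rescale after the matmul) might look like the sketch below. The per-token granularity, clipping range, and epsilon are assumptions on my part rather than a copy of the paper's code.

```python
import torch

def absmax_int8_quant(x: torch.Tensor, eps: float = 1e-5):
    """Sketch of 8-bit absmax activation quantization: each token (row)
    is scaled so its largest magnitude maps to 127, then rounded to int8.
    The per-token scale is returned so outputs can be rescaled afterwards."""
    scale = 127.0 / x.abs().amax(dim=-1, keepdim=True).clamp(min=eps)
    x_q = (x * scale).round().clamp(-127, 127).to(torch.int8)
    return x_q, scale

x = torch.randn(2, 8)                 # 2 tokens, hidden size 8
x_q, s = absmax_int8_quant(x)
x_dq = x_q.to(torch.float32) / s      # approximate reconstruction of x
```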

Additionally, the paper demonstrates that BitNet b1.58 is suitable for deployment on edge and mobile devices due to its reduced memory and energy consumption, enabling new applications and improving device performance. The findings also highlight the potential for designing new hardware specifically optimized for 1-bit LLMs, given the new computation paradigm enabled by BitNet b1.58.
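
As a rough back-of-envelope illustration of why on-device deployment becomes plausible (these are not numbers from the paper, and they cover weight storage only, ignoring activations and the KV cache), compare 16-bit weights with ternary weights packed at 2 bits each for a hypothetical 3B-parameter model:

```python
params = 3_000_000_000                       # hypothetical 3B-parameter model

fp16_bytes = params * 16 / 8                 # 16 bits per weight
ternary_bytes = params * 2 / 8               # ternary values packed as 2 bits

print(f"FP16 weights:    {fp16_bytes / 1e9:.2f} GB")          # 6.00 GB
print(f"Ternary weights: {ternary_bytes / 1e9:.2f} GB")        # 0.75 GB
print(f"Reduction:       {fp16_bytes / ternary_bytes:.0f}x")   # 8x
```

Actual end-to-end savings reported in the paper are smaller than this weight-only ratio, since runtime memory also includes activations and the KV cache.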

Reference: https://arxiv.org/abs/2402.177...