Key Points

1. Activation Beacon is proposed as a method to extend the context length of large language models (LLMs) by condensing the LLM's raw activations into highly compact forms, allowing the model to perceive a long context through a limited context window.

2. The proposed method condenses the LLM's raw activations while retaining the model's original capabilities on short contexts, and it processes long contexts in a streaming fashion with a sliding window during both training and inference, achieving competitive memory and time efficiency.

3. Activation Beacon is trained on short-sequence data with diversified condensing ratios, enabling it to learn to support different context lengths at a small training cost.

4. Experiments verify Activation Beacon's effectiveness for context extension: it achieves a high-quality extension of the context by up to 100 times, extending Llama-2-7B's 4K window to 400K tokens.

5. Activation Beacon is compatible with existing LLMs and can be combined with context window extension methods, further extending the context length without additional fine-tuning.

6. When combined with retrieval techniques, Activation Beacon significantly improves memory accuracy, achieving 100% accuracy at all tested context lengths on memory-recall tasks such as passkey retrieval.

7. The condensing ratio of Activation Beacon can be flexibly configured at inference time: lower condensing ratios yield higher generation quality but accommodate shorter overall context lengths (see the arithmetic sketch after this list).

8. Activation Beacon can be further optimized so that tokens at the beginning of each interval always attend to some raw activations as local context, preventing degradation of generation quality, especially when answering user instructions.

9. The method is not only effective for context extension but also efficient and compatible with a wide range of tasks, making it a promising solution for extending LLMs' ability to process longer contexts.
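
As a rough illustration of the trade-off in point 7, the sketch below works through the context-length arithmetic: with raw activations condensed k-to-1, a fixed attention window can cover roughly k times as many raw tokens. The window size and ratios here are illustrative assumptions, not values taken from the paper's released code.

```python
# Rough arithmetic behind the condensing-ratio trade-off (illustrative numbers).

def effective_context(window_size: int, condensing_ratio: int) -> int:
    """With activations condensed k-to-1, a window of `window_size` slots can
    hold condensed memories of roughly window_size * k raw tokens."""
    return window_size * condensing_ratio

for ratio in (2, 4, 8, 16, 100):
    covered = effective_context(4096, ratio)
    print(f"ratio x{ratio:>3}: a 4K window covers ~{covered:,} raw tokens")
```

At a ratio of 100, a 4K window covers roughly 400K raw tokens, matching the 100x extension reported for Llama-2-7B; at lower ratios, less is compressed away (better quality) but fewer raw tokens fit.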

Summary

Subtitle 1: Introduction to Activation Beacon
The research paper, authored by Peitian Zhang, Zheng Liu, and colleagues, proposes a method called Activation Beacon to extend the context length of large language models (LLMs) while maintaining efficiency and compatibility. The paper discusses the challenge posed by LLMs' limited context window size and the unfavorable training and inference costs of extending the context through fine-tuning. Activation Beacon is introduced as a plug-in module that condenses the LLM's raw activations into compact forms, enabling the model to perceive a longer context with a limited context window. The technique works with sliding windows for stream processing (sketched below), ensuring competitive memory and time efficiency in both training and inference. Activation Beacon is trained with short-sequence data of diversified condensing ratios, allowing it to support different context lengths at a small training cost. Experimental results demonstrate Activation Beacon's effectiveness, including a remarkable 100-fold extension of Llama-2-7B's context (from 4K to 400K tokens) and superior performance across various long-context language modeling and understanding tasks.
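
To make the sliding-window mechanism concrete, here is a minimal runnable sketch of the streaming condensing loop. Mean-pooling stands in for the learned beacon condensing (the actual method condenses attention activations via dedicated beacon tokens and trained parameters); the window and ratio values are illustrative.

```python
import torch

def stream_condense(hidden_states: torch.Tensor, window: int = 8, ratio: int = 4):
    """Process a long sequence of raw activations window by window, keeping
    only ~1/ratio condensed activations per window as a compact memory.

    hidden_states: (seq_len, dim) raw activations of a long sequence.
    """
    cache = []  # compact activations carried across windows
    for start in range(0, hidden_states.size(0), window):
        chunk = hidden_states[start:start + window]   # one sliding window
        n_beacons = max(1, chunk.size(0) // ratio)
        # Stand-in condensing: pool each group of `ratio` raw activations
        # into a single compact one (the paper learns this condensing).
        groups = chunk.split(ratio)
        cache.extend(g.mean(dim=0) for g in groups[:n_beacons])
    return torch.stack(cache)

x = torch.randn(64, 16)           # 64 raw activations of dimension 16
print(stream_condense(x).shape)   # -> torch.Size([16, 16]) at ratio 4
```

Because each window only ever attends to the compact cache plus its own raw activations, memory stays bounded regardless of the total sequence length, which is what makes streaming training and inference efficient.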

Subtitle 2: Activation Beacon Applications and Extensions
The research also discusses how Activation Beacon can be combined with context window extension techniques, such as Position Interpolation and NTK-aware scaling, to further extend the context length. It explores the potential of Activation Beacon in collaboration with retrieval techniques to accurately recall specific information buried in long contexts. Additionally, the paper investigates the impact of different condensing ratios on Activation Beacon's performance and discusses potential improvements to the sliding-window method.
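
For reference, the sketch below shows the two rescaling tricks named above in their commonly cited forms: Position Interpolation squeezes position indices into the trained range, while NTK-aware scaling enlarges the RoPE base. The function names and the dimension/scale values are our own illustrative choices, not code from the paper.

```python
import torch

def rope_inv_freq(dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies for a head dimension `dim`."""
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

def position_interpolation(positions: torch.Tensor, scale: float) -> torch.Tensor:
    # PI: compress position indices so a scale-times-longer sequence
    # falls inside the position range seen during training.
    return positions / scale

def ntk_aware_inv_freq(dim: int, scale: float, base: float = 10000.0) -> torch.Tensor:
    # NTK-aware scaling: keep positions intact but enlarge the RoPE base,
    # stretching the low frequencies that encode long-range order.
    return rope_inv_freq(dim, base * scale ** (dim / (dim - 2)))

pos = torch.arange(8192).float()
print(position_interpolation(pos, scale=2.0)[-1])  # last index mapped to 4095.5
print(ntk_aware_inv_freq(64, scale=2.0)[:3])       # gently rescaled frequencies
```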

Subtitle 3: Implications and Future Work
The paper suggests that Activation Beacon realizes a dramatic extension of the LLM's context based on high-quality condensing of its activations. It establishes long-context capabilities for LLMs while preserving their existing capabilities and high running efficiency. The authors also consider potential future improvements, including adjusting the sliding-window stride to better utilize raw context information.

Reference: https://arxiv.org/abs/2401.03462