Key Points

1. Multilingual Large Language Models (MLLMs) can handle and respond to queries in multiple languages, and they have achieved remarkable success on multilingual natural language processing tasks.

2. Progress to date, however, has centered on English-centric LLMs, which remain comparatively weak in multilingual settings, especially in low-resource scenarios.

3. MLLMs offer the advantage of handling multiple languages within a single model and are therefore attracting increasing attention. Existing MLLMs can be broadly divided into two groups according to the stage at which multilingual alignment is achieved, yielding a new taxonomy for summarizing current progress.

4. Parameter-tuning alignment (updating model weights, e.g., via multilingual pretraining or instruction tuning) and parameter-frozen alignment (eliciting alignment purely through prompting, with no weight updates) are the two main methodologies for achieving multilingual alignment in MLLMs; minimal sketches of both appear after this list.

5. Open challenges for MLLMs include multilingual hallucination, inconsistent knowledge across languages, safety risks, fairness, inclusion of low-resource languages, and integration with other modalities.

6. For alignment without parameter tuning, the survey discusses prompting strategies such as direct prompting, code-switching prompting, translation alignment prompting, and retrieval-augmented alignment; a minimal sketch of translation alignment prompting appears after this list.

7. The survey frames these challenges as future research directions, spanning safety, fairness, knowledge consistency, low-resource language inclusion, and integration with other modalities.

8. The paper closes with a comprehensive overview of advances in multilingual large language models, aiming to facilitate research and inspire further breakthroughs.

9. The overall takeaway is that continued improvement and exploration are needed to address the highlighted challenges and advance the capabilities of these models.
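
To make the contrast in point 4 concrete, here is a minimal sketch of parameter-tuning alignment: a single gradient step on one parallel instruction pair updates the weights of a multilingual base model. The model choice, training pair, and hyperparameters are illustrative assumptions, not details from the survey.

```python
# Minimal sketch of parameter-tuning alignment: fine-tune a multilingual
# base model on one parallel instruction pair so its weights are updated.
# The model name and training text are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # any multilingual causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One parallel instruction pair: an English instruction with a French target.
text = "Translate to French: The weather is nice today. => Il fait beau aujourd'hui."
batch = tokenizer(text, return_tensors="pt")

# Standard causal-LM step: the loss gradient flows into the model's
# parameters, which is what "parameter-tuning" alignment refers to.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```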

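Parameter-frozen alignment, by contrast, leaves the weights untouched. Below is a minimal sketch of the translation alignment prompting strategy from point 6, where a query is routed through a high-resource pivot language. The function name, prompt wording, and `llm` callable are assumptions for illustration, not an API from the survey.

```python
# Minimal sketch of translation alignment prompting with a frozen model:
# route a non-English query through a high-resource pivot language.
# `llm` is any text-completion callable; the prompts are illustrative.
from typing import Callable

def translation_align_answer(query: str, llm: Callable[[str], str],
                             pivot: str = "English") -> str:
    # Step 1: translate the query into the pivot language, where the
    # frozen model is typically strongest.
    pivot_query = llm(f"Translate this question into {pivot}: {query}")
    # Step 2: answer the question in the pivot language.
    pivot_answer = llm(f"Answer concisely: {pivot_query}")
    # Step 3: translate the answer back into the query's original language.
    return llm(f"Translate this answer into the language of the question "
               f"'{query}': {pivot_answer}")
```

Direct prompting would skip the pivot steps entirely and query the model in the source language; the code-switching and retrieval-augmented variants instead enrich the prompt with mixed-language tokens or retrieved context.
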
Reference: https://arxiv.org/abs/2404.049...