Key Points

1. Decompilation is the process of converting machine-level code back into human-readable source code, and traditional methods often struggle with recreating details like variable names and program structure.

2. Large language models (LLMs) have shown promise for programming tasks and the decompilation process, but there is a lack of open-source LLMs for decompilation, as well as standardized benchmarks for evaluating decompilation techniques.

3. The paper introduces LLM4Decompile, the first open-access decompilation LLMs, ranging from 1B to 33B parameters and pre-trained on 4 billion tokens of C source code and the corresponding assembly code. These LLMs can serve as baselines for further development in the field.

4. The authors also introduce Decompile-Eval, the first dataset that considers re-compilability and re-executability for decompilation, emphasizing the importance of evaluating the decompilation model from the perspective of program semantics.

5. LLM4Decompile has demonstrated the capability to accurately decompile 21% of the assembly code, a 50% improvement over GPT-4, and the models, code, and dataset are publicly released.

6. The paper discusses the limitations of traditional decompilation methods and the potential of LLMs, specifically highlighting previous efforts utilizing recurrent neural networks and recent advancements in natural language processing.

7. The researchers focus on pre-training objectives and data for LLM4Decompile, using 4 billion tokens of C source code from AnghaBench and discussing different model configurations and training objectives.

8. A notable gap in the field of decompilation is the lack of a unified, accepted benchmark for measuring the quality of decompilation tools. This motivates the authors to develop the first open-source LLM tailored for decompilation and to establish the first benchmark for re-compilability and re-executability, setting a standard for performance evaluation.

9. Evaluation results on both Decompile-Eval and AnghaBench show promising capabilities of LLM4Decompile, achieving around 90% re-compilability and 21% re-executability for the 6B model, indicating syntactic understanding and semantic preservation. Ablation studies on training methodologies also contribute to understanding model performance.

Summary

The research paper "LLM4Decompile: Decompiling Binary Code with Large Language Models" addresses the challenges of decompiling compiled code into human-readable source code. The paper highlights the potential of large language models (LLMs) for decompilation and the lack of open-source LLMs specifically designed for it. To address this gap, the authors introduce the first open-access decompilation LLMs, pre-trained on C source code and the corresponding assembly code, ranging from 1B to 33B parameters and released as open-source models that provide baselines for further development in the field.

Decompilation Dataset and Model Performance
The authors also introduce "Decompile-Eval," the first dataset emphasizing re-compilability and re-executability for decompilation, enabling practical program evaluation from the perspective of program semantics. The benchmark evaluates decompilation models on both program syntax and program semantics. LLM4Decompile is shown to accurately decompile 21% of the assembly code, a 50% improvement over GPT-4. The code, dataset, and models are made publicly available to encourage further advancements in the field.

Challenges and Discussion
The paper emphasizes the challenges of decompilation, the promising role of LLMs in addressing these challenges, and the need for practical program evaluation focusing on both syntax and semantics. The findings demonstrate the capability of LLM4Decompile to accurately decompile assembly code and highlight the importance of considering re-compilability and re-executability when evaluating decompilation models. The authors also discuss the limitations of the study, including the restriction to the C language and the simplification of decompilation to single functions, pointing to future research that could address these aspects and expand the understanding of decompilation.

Overall, the paper presents crucial advancements in decompilation technology, providing open-access LLMs and a new benchmark for evaluating decompilation models based on program semantics, thereby paving the way for further developments in the field.

Reference: https://arxiv.org/abs/2403.05286v1