Summary

Researchers have introduced Graph of Thoughts (GoT), a framework that enhances the prompting capabilities of large language models (LLMs). GoT models the information generated by an LLM as an arbitrary graph: thoughts are vertices and dependencies between thoughts are edges. This structure makes it possible to combine thoughts into synergistic outcomes, distill the essence of whole networks of thoughts, and enhance thoughts through feedback loops. Compared to existing paradigms such as Chain-of-Thought (CoT) and Tree of Thoughts (ToT), GoT performs better on tasks like sorting, improving sorting quality by 62% over ToT while reducing costs by more than 31%. GoT is also extensible: new thought transformations can be added and different LLMs can be used as the backend.

The GoT architecture consists of a Prompter, a Parser, a Scoring module, and a Controller. These modules work together to generate prompts, extract information from LLM replies, evaluate thoughts, and coordinate the reasoning process. Compared against prompting schemes such as Input-Output (IO), CoT, and ToT, GoT consistently delivers higher quality at lower cost. Overall, GoT enables more powerful and flexible prompting, bringing LLM reasoning closer to human thinking processes.
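To make the graph model concrete, the sketch below shows one way thoughts and their dependencies could be represented, together with the three operations the summary describes: generating new thoughts from an existing one, aggregating several thoughts into a single synergistic outcome, and refining a thought through a feedback loop. This is a minimal illustration, not the framework's actual API; the class and function names are assumptions, and the thought contents would in practice come from LLM calls.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    """A single LLM-generated thought: one vertex in the reasoning graph."""
    content: str
    score: float = 0.0

@dataclass
class GraphOfThoughts:
    """Thoughts as vertices, dependencies as directed edges (parent -> child)."""
    thoughts: list = field(default_factory=list)   # list of Thought
    edges: list = field(default_factory=list)      # list of (parent_idx, child_idx)

    def add_thought(self, thought, parents=()):
        """Insert a new vertex and connect it to its parent thoughts."""
        self.thoughts.append(thought)
        child = len(self.thoughts) - 1
        self.edges.extend((p, child) for p in parents)
        return child

# Illustrative (hypothetical) thought transformations expressed on the graph:

def generate(graph, parent, candidates):
    """Branch: derive several candidate thoughts from one thought (as in ToT)."""
    return [graph.add_thought(Thought(c), parents=(parent,)) for c in candidates]

def aggregate(graph, parents, merged_content):
    """Combine several thoughts into one synergistic outcome -- the step unique to GoT."""
    return graph.add_thought(Thought(merged_content), parents=parents)

def refine(graph, target, improved_content):
    """Feedback loop: add an improved version of a thought that depends on the original."""
    return graph.add_thought(Thought(improved_content), parents=(target,))

# Example on the sorting task described in the summary (contents are placeholders):
g = GraphOfThoughts()
root = g.add_thought(Thought("split the input list into chunks"))
branches = generate(g, root, ["sort chunk A", "sort chunk B"])
merged = aggregate(g, branches, "merge the sorted chunks")
```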

Key points

1. Graph of Thoughts (GoT) is a new framework that enhances the prompting capabilities of large language models (LLMs).
2. GoT models the information generated by an LLM as an arbitrary graph, in which LLM thoughts are vertices and dependencies between thoughts are edges.
3. GoT supports combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, and enhancing thoughts through feedback loops.
4. GoT offers advantages over state-of-the-art paradigms such as Chain-of-Thought (CoT) and Tree of Thoughts (ToT), for example increasing sorting quality by 62% over ToT while reducing costs by more than 31%.
5. GoT is extensible with new thought transformations, enabling the development of new prompting schemes.
6. GoT brings LLM reasoning closer to human thinking and brain mechanisms by enabling the modeling of complex networks of thoughts.
7. The GoT framework has a modular architecture (Prompter, Parser, Scoring module, Controller) that allows fine-grained control over individual thoughts and seamless integration with different LLMs; a sketch of how these modules might fit together follows this list.
8. GoT has been applied to various tasks, including sorting, keyword counting for summaries, set operations, and document merging.
9. GoT demonstrates higher quality outcomes and reduced costs compared to other prompting schemes such as Chain-of-Thought, Tree of Thoughts, and Input-Output prompting.
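As noted in key point 7, GoT's architecture is modular; the summary names a Prompter, a Parser, a Scoring module, and a Controller. The sketch below shows one plausible way these modules could be wired together around an arbitrary LLM backend. The interfaces and method names are assumptions for illustration, not the framework's real API.

```python
from typing import Callable, List, Protocol

class Prompter(Protocol):
    def build_prompt(self, thoughts: List[str]) -> str: ...   # turn thoughts into an LLM prompt

class Parser(Protocol):
    def parse(self, llm_reply: str) -> List[str]: ...         # extract thoughts from the reply

class Scorer(Protocol):
    def score(self, thought: str) -> float: ...               # evaluate a single thought

class Controller:
    """Coordinates the reasoning process: prompt -> LLM -> parse -> score -> prune."""

    def __init__(self, llm: Callable[[str], str],
                 prompter: Prompter, parser: Parser, scorer: Scorer):
        self.llm = llm            # any LLM backend exposed as a text-in, text-out callable
        self.prompter = prompter
        self.parser = parser
        self.scorer = scorer

    def step(self, current_thoughts: List[str], keep: int = 2) -> List[str]:
        """Run one expansion of the thought graph and keep the best-scoring thoughts."""
        prompt = self.prompter.build_prompt(current_thoughts)
        reply = self.llm(prompt)
        new_thoughts = self.parser.parse(reply)
        ranked = sorted(new_thoughts, key=self.scorer.score, reverse=True)
        return ranked[:keep]
```

Because the Controller only sees the llm callable, swapping in a different model amounts to passing a different function, which is one way to read key point 7's claim of seamless integration with different LLMs.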

Reference: https://arxiv.org/abs/2308.096...