Key Points

1. Large language models (LLMs) such as ChatGPT and LLaMA have become influential in natural language processing owing to their strong text encoding/decoding abilities and emergent reasoning capabilities.

2. Real-world text data often carries rich graph structure, as in academic citation and e-commerce networks; conversely, graph data may be paired with textual information, as with molecules and their descriptions.

3. The paper provides a systematic review of scenarios and techniques related to large language models on graphs, categorizing scenarios into pure graphs, text-rich graphs, and text-paired graphs.

4. Techniques for utilizing LLMs on graphs fall into three roles: LLM as Predictor, LLM as Encoder, and LLM as Aligner, each with its own advantages and disadvantages (a minimal code sketch of the Encoder role follows this list).

5. Existing surveys have focused more on graph neural networks (GNNs) than on LLMs, or have failed to provide a systematic perspective on their applications across graph scenarios.

6. The paper makes several notable contributions: a categorization of graph scenarios, a systematic review of techniques, a collection of resources on language models on graphs, and proposed future research directions for the field.

7. The paper treats each scenario in depth, covering graph reasoning tasks on pure graphs, learning on text-rich graphs, and the Predictor, Encoder, and Aligner roles for utilizing LLMs on graphs.

8. Researchers have explored the intersection of LLMs and graphs, organizing techniques for adopting LLMs on graphs by scenario: pure graphs, text-rich graphs, and text-paired graphs.

9. Further research in this field may include exploring LLMs as encoders in cascaded architectures, going beyond representation learning, and developing advanced graph-empowered LLM techniques.
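To make the "LLM as Encoder" role concrete, below is a minimal sketch, not the paper's implementation, of an LLM-GNN cascaded pipeline: a text encoder embeds each node's text, and a single mean-aggregation GNN layer propagates those embeddings over the graph. The embed_text function is a hypothetical stand-in for any LLM encoder (e.g., a BERT-style model).

```python
import numpy as np

def embed_text(text: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for an LLM text encoder (e.g., a BERT-style model).
    Here: a hash-seeded pseudo-embedding so the sketch runs anywhere."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def gnn_mean_layer(node_vecs: np.ndarray, edges: list[tuple[int, int]]) -> np.ndarray:
    """One mean-aggregation GNN layer: each node averages itself and its neighbors."""
    n = node_vecs.shape[0]
    agg = node_vecs.copy()
    counts = np.ones(n)
    for u, v in edges:
        agg[u] += node_vecs[v]
        agg[v] += node_vecs[u]
        counts[u] += 1
        counts[v] += 1
    return agg / counts[:, None]

# A tiny text-rich graph: papers (node-level text) linked by citations.
node_texts = ["GNNs for citation networks", "Pretraining language models", "LLMs on graphs"]
edges = [(0, 2), (1, 2)]

x = np.stack([embed_text(t) for t in node_texts])   # LLM as Encoder
h = gnn_mean_layer(x, edges)                        # GNN over the LLM embeddings
print(h.shape)  # (3, 8): structure-aware node representations
```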

Summary

Overview of LLMs on Graph-Based Applications
The paper provides a systematic review of scenarios and techniques for applying large language models (LLMs) to graph-based applications. It explores the potential of LLMs for fundamental graph reasoning problems and categorizes LLM-on-graph scenarios into three classes: pure graphs, text-rich graphs, and text-paired graphs. It then details techniques for utilizing LLMs on graphs (LLM as Predictor, LLM as Encoder, and LLM as Aligner), compares the advantages and disadvantages of different models, and summarizes potential future research directions in this fast-growing field, targeting readers from diverse backgrounds.

The paper also introduces the background of LLMs and graph neural networks, lists commonly used notations, and defines related concepts. In particular, it gives definitions and examples of graphs with node-level, edge-level, and graph-level textual information. Finally, it discusses future research directions such as advanced knowledge distillation and going beyond representation learning in LLM-GNN cascaded architectures.
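To make the three levels of textual information concrete, here is a small illustrative sketch; the field names and examples are assumptions for illustration, not the paper's notation.

```python
# Node-level text: an academic network where each node (a paper) carries its title.
text_rich_graph = {
    "nodes": {0: "Attention is all you need", 1: "Pretraining deep bidirectional transformers"},
    "edges": [(1, 0)],          # citation links (structure only)
}

# Edge-level text: a social network where the interaction itself is text (e.g., a reply).
edge_text_graph = {
    "nodes": {0: "user_a", 1: "user_b"},
    "edges": [((0, 1), "Great point, but have you considered ...")],
}

# Graph-level text: a molecule (atoms and bonds) paired with a free-text description.
text_paired_graph = {
    "nodes": {0: "C", 1: "O"},  # atoms
    "edges": [(0, 1)],          # bonds
    "text": "A small molecule fragment containing carbon and oxygen.",
}
```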

Techniques and Applications of LLMs in Graphs
The paper gives a comprehensive overview of how LLMs are utilized in graph-based applications. It classifies LLM-on-graph scenarios, examines the models' potential for fundamental graph reasoning problems, and compares existing surveys on LLMs and graphs. It discusses how an LLM-GNN cascaded pipeline can capture both text and structure information, and how, in LLM-as-Aligner methods, the LLM is trained on text data and the GNN on structure data iteratively so that both contribute to a final joint text-and-graph encoding. It also covers methods that feed graphs to LLMs directly: rule-based graph linearization, tokenization, and encoding the linearized graph with encoder-only, encoder-decoder, or decoder-only architectures.

Beyond techniques, the paper catalogs datasets, open-source codebases, and applications of LLMs on text-rich graphs, molecular graphs, and social media platforms. It suggests future research directions such as better benchmark datasets, multi-modal foundation models, efficient LLMs on graphs, generalizable and robust LLMs on graphs, and LLMs as dynamic agents on graphs. Overall, the paper is an extensive, detailed contribution to research on utilizing large language models in graph-based applications.
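To illustrate the rule-based graph linearization described above, the following sketch serializes a node set and edge list into a textual prompt a decoder-only LLM could consume. The exact verbalization rules vary across the methods surveyed; this one-fact-per-line template is only an assumption for illustration.

```python
def linearize_graph(nodes: dict[int, str], edges: list[tuple[int, int]]) -> str:
    """Rule-based linearization: verbalize nodes, then edges, one fact per line."""
    lines = [f"Node {i} is {name}." for i, name in sorted(nodes.items())]
    lines += [f"Node {u} is connected to node {v}." for u, v in edges]
    return "\n".join(lines)

nodes = {0: "Alice", 1: "Bob", 2: "Carol"}
edges = [(0, 1), (1, 2)]

prompt = linearize_graph(nodes, edges) + "\nQuestion: Is there a path from Alice to Carol?"
print(prompt)
# A decoder-only LLM would now answer the question from the serialized graph alone.
```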

Research Findings on LLMs and Graph-Based Applications
The paper comprehensively reviews the use of large language models (LLMs) in graph-based applications. It categorizes LLMs applied to graph scenarios, explores their potential for fundamental graph reasoning problems, and compares existing surveys on LLMs and graphs. The authors summarize and analyze a wide array of papers and preprints on topics such as graph-based pre-training, label-free node classification on graphs, few-shot learning on text-attributed graphs, graph reasoning in text space, and language modeling based on global contexts via graph neural networks. They also cover graph-based NLP, representation learning on textual graphs, and probing the graph reasoning ability of LLMs. The paper synthesizes research on leveraging LLMs for text classification, graph reasoning, question answering, and other graph-based tasks, and discusses how LLM performance on graphs is evaluated and benchmarked, along with the challenges of applying LLMs to graph problems. It also argues for better benchmarks for machine learning on graphs and highlights the need for more efficient training methods.
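One way to read the benchmarking discussion: evaluating an LLM on graph problems amounts to generating instances with known answers and scoring the model's text output against them. The sketch below does this for connectivity queries, with ask_llm as a hypothetical placeholder for any LLM API call, and BFS providing the ground truth.

```python
import random
from collections import deque

def connected(n: int, edges: list[tuple[int, int]], s: int, t: int) -> bool:
    """Ground-truth connectivity via breadth-first search."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call; trivially answers 'yes' here."""
    return "yes"

random.seed(0)
trials, correct, n = 100, 0, 6
for _ in range(trials):
    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.3]
    s, t = random.sample(range(n), 2)
    prompt = f"Graph edges: {edges}. Is node {s} connected to node {t}? Answer yes or no."
    pred = ask_llm(prompt).strip().lower().startswith("yes")
    correct += (pred == connected(n, edges, s, t))
print(f"accuracy: {correct / trials:.2f}")
```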

Furthermore, it examines the potential of LLMs in speeding up the process of molecular discovery and optimization, as well as their role in drug discovery and virtual screening. Lastly, the paper emphasizes the significance of data-centric learning from unlabeled graphs and scaling deep learning for materials discovery.

Reference: https://arxiv.org/abs/2312.02783