Introduction and Motivation

In this paper, the authors propose a new method called Active-Prompt for adapting large language models (LLMs) to different tasks by selecting task-specific example prompts. They argue that effective prompt design is critical to an LLM's ability to produce high-quality answers. Focusing on complex reasoning tasks, they build on chain-of-thought (CoT) prompting, an existing technique in which exemplars with intermediate reasoning steps significantly improve LLM performance. However, current CoT methods rely on a fixed set of human-annotated exemplars, which are not necessarily the most effective examples for every task.

The Active-Prompt Method

To address this issue, Active-Prompt leverages uncertainty to select the most important and helpful questions for human annotation. Borrowing ideas from the related problem of uncertainty-based active learning, the authors introduce several metrics, such as disagreement and entropy over multiple sampled answers, to characterize the model's uncertainty about each question. These metrics are then used to select the most uncertain questions from a pool of task-specific queries, and those questions are annotated with human-written chain-of-thought rationales, as sketched below.
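
To make the selection step concrete, the following sketch shows one way such an uncertainty-based ranking could look. It is not the authors' code: the sample_answers callable standing in for the LLM, the default values of k and n, and the exact metric implementations are illustrative assumptions.

from collections import Counter
import math
from typing import Callable, List

def disagreement(answers: List[str]) -> float:
    # Fraction of distinct final answers among the k sampled completions.
    return len(set(answers)) / len(answers)

def answer_entropy(answers: List[str]) -> float:
    # Shannon entropy of the empirical distribution over final answers.
    counts = Counter(answers)
    total = len(answers)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_uncertain_questions(
    questions: List[str],
    sample_answers: Callable[[str, int], List[str]],  # placeholder LLM wrapper (assumed)
    k: int = 10,   # CoT completions sampled per question (illustrative value)
    n: int = 8,    # questions passed on for human annotation (illustrative value)
    metric: Callable[[List[str]], float] = disagreement,
) -> List[str]:
    # Score every question in the pool with the chosen uncertainty metric,
    # then return the n most uncertain ones for human CoT annotation.
    scored = [(metric(sample_answers(q, k)), q) for q in questions]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [q for _, q in scored[:n]]

In the full pipeline, the selected questions would then receive human-written rationales and serve as the exemplars for chain-of-thought prompting on the downstream task.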

Experimental Results and Analysis

Experimental results demonstrate the superiority of the proposed method, which achieves state-of-the-art performance on eight complex reasoning tasks. The authors also analyze the effects of different uncertainty metrics, pool sizes, and zero-shot settings, and examine the relationship between accuracy and uncertainty.

Conclusion and Implications

Overall, Active-Prompt is a promising method for adapting LLMs to different tasks by selecting task-specific example prompts based on uncertainty metrics. It improves the performance of LLMs on complex reasoning tasks and demonstrates the benefits of active question selection in chain-of-thought prompting.

Reference: https://arxiv.org/abs/2302.12246