Key Points

- The paper introduces MentaLLaMA, a framework for interpretable mental health analysis on social media using large language models (LLMs).

- Traditional discriminative methods for mental health analysis on social media lack interpretability, prompting the exploration of LLMs for interpretable mental health analysis.

- The authors outline the challenges LLMs face in this setting, including unsatisfactory classification performance in zero-shot and few-shot learning and the lack of high-quality training data for domain-specific fine-tuning.

- To address these challenges, the authors model interpretable mental health analysis as text generation tasks and build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset, with 105K samples to support instruction tuning; a minimal sketch of one such instruction sample appears after this list.

- The paper describes the collection of raw social media data from 10 existing sources covering 8 mental health analysis tasks, including depression and stress detection, cause detection, and risk/wellness factors detection.

- The authors use ChatGPT with expert-designed prompts to generate explanations for the collected data, and they conduct both automatic and human evaluations to verify the reliability and quality of these explanations (see the prompting sketch after this list).

- Based on the IMHI dataset, the authors propose MentaLLaMA, the first open-source instruction-following LLM series for interpretable mental health analysis. Trained and evaluated on the IMHI benchmark, MentaLLaMA approaches state-of-the-art discriminative methods in classification correctness and generates explanations of ChatGPT-level quality.

- The paper highlights the contributions of formalizing interpretable mental health analysis tasks, proposing MentaLLaMA, and introducing the first holistic evaluation benchmark for interpretable mental health analysis.

- The authors discuss the implications and limitations of their work, emphasizing that the framework is intended for non-clinical research only and that potential biases and risks associated with LLMs must be addressed.
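
The key points above describe how classification tasks are recast as text generation over instruction samples. As a hedged illustration only, the snippet below shows what one such sample might look like; the field names, prompt wording, and post/explanation text are invented for this sketch and are not taken from the released IMHI data.

```python
# Hypothetical IMHI-style instruction sample (all text invented for illustration).
imhi_style_sample = {
    "task": "depression_detection",  # one of the 8 mental health analysis tasks
    "instruction": (
        'Consider this post: "I have not left my room in days and I feel numb." '
        "Question: Does the poster suffer from depression?"
    ),
    "response": (
        "Yes, the poster likely suffers from depression. Reasoning: the post "
        "describes prolonged social withdrawal and emotional numbness, which are "
        "common indicators of a depressive episode."
    ),
}
```

Instruction tuning then trains the LLM to map the "instruction" text to the "response" text, so the label and its explanation are produced in a single generated answer.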

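For the explanation-generation step mentioned above, the sketch below shows one way ChatGPT could be queried through the OpenAI Python SDK (v1). The prompt template, model name, and post text are illustrative assumptions, not the paper's exact expert-designed prompts.

```python
# Hedged sketch: asking ChatGPT for a label plus explanation for one post.
# The prompt wording and model choice are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

post = "I have been skipping meals and cannot focus at work."  # invented example
prompt = (
    f'Consider this post: "{post}" '
    "Question: Does the poster suffer from depression? "
    "Answer yes or no, then explain your reasoning based on evidence in the post."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,  # deterministic output makes quality checks easier to repeat
)
print(response.choices[0].message.content)
```

Outputs of this kind are then screened through the automatic and human evaluations described above before serving as gold explanations in the IMHI dataset.
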
Summary

The paper "MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models" explores the use of large language models (LLMs) for interpretable mental health analysis on social media. The study identifies the limitations of previous discriminative methods in providing low interpretability and introduces MentaLLaMA, the first open-source LLM series for interpretable mental health analysis. The research highlights the challenges of achieving satisfactory classification performance in zero-shot or few-shot learning settings with LLMs and the critical need for high-quality training data. To address these challenges, the study proposes the IMHI dataset, the first multi-task and multi-source interpretable mental health instruction dataset with 105K data samples to support LLM instruction tuning.

Additionally, the study evaluates MentaLLaMA on the IMHI benchmark and shows that it approaches state-of-the-art discriminative methods in classification correctness while generating explanations of ChatGPT-level quality. The paper emphasizes the potential of LLMs to improve the interpretability and reliability of mental health analysis. The study also addresses privacy concerns and ethical principles, stresses that such models should be restricted to non-clinical research, and highlights potential biases as well as the importance of professional intervention for individuals seeking help.
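
The correctness side of such an evaluation can be approximated by extracting a label from each generated answer and scoring it against the gold label, for example with weighted F1. The label set, extraction rule, and example outputs below are simplified assumptions, not the benchmark's exact protocol, which additionally assesses the quality of the generated explanations.

```python
# Simplified sketch of scoring classification correctness from generated text.
# Label set, extraction rule, and example outputs are assumptions for illustration.
import re

from sklearn.metrics import f1_score

LABELS = ["yes", "no"]  # assumed binary labels for a detection task

def extract_label(text: str) -> str:
    """Return the first label word found in the generated answer."""
    tokens = re.findall(r"[a-z]+", text.lower())
    for label in LABELS:
        if label in tokens:
            return label
    return "no"  # fallback when no label word is found

gold = ["yes", "no", "yes"]
generated = [
    "Yes, the poster shows several signs of depression because ...",
    "No, the post reflects ordinary work stress rather than depression ...",
    "Yes, persistent hopelessness and loss of appetite suggest depression ...",
]
pred = [extract_label(text) for text in generated]
print(f1_score(gold, pred, average="weighted"))
```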

The findings provide insights into the advancements and challenges in utilizing LLMs for interpretable mental health analysis and demonstrate the potential of MentaLLaMA in addressing these challenges.

Reference: https://arxiv.org/abs/2309.13567