Key Points

1. Vision-Language Models (VLMs) integrate textual and visual inputs and have become increasingly powerful with the incorporation of Large Language Models (LLMs).

2. The Red Teaming Visual Language Model (RTVLM) dataset was introduced to benchmark current VLMs in terms of faithfulness, privacy, safety, and fairness, revealing that open-sourced VLMs struggle with red teaming in various scenarios.

3. Evaluated with GPT-4V as the judge, open-sourced VLMs exhibit up to a 31% performance gap relative to GPT-4V itself and lack red teaming alignment.

4. The RTVLM dataset comprises 10 subtasks distributed across faithfulness, privacy, safety, and fairness, with 5,200 samples created to test VLMs' performance under challenging scenarios (a hypothetical sample layout is sketched after this list).

5. The evaluation of VLMs on RTVLM reveals that most VLMs struggle to accurately discern textual content within images and lack alignment for privacy protection and other red teaming scenarios.

6. Human evaluation aligns closely with the main results scored by GPT-4V, indicating that GPT-4V is a reliable automatic evaluator for these red teaming tasks.

7. The study demonstrates that enriching alignment data with red teaming examples improves a model's performance on red teaming tasks without major changes in downstream task performance.

8. Advancements in LLMs have significantly impacted the evolution of VLMs, and various model types and methods have been investigated for red teaming purposes.

9. The study highlights the significance of VLM security and proposes the first VLM red teaming dataset, RTVLM, to draw attention to VLM security and provide insights for enhancing it.
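
As a concrete illustration of point 4, the sketch below shows one plausible way a single RTVLM sample could be represented in code. The field names (aspect, subtask, image, instruction, reference) and the example values are assumptions made for illustration only; the actual schema is defined by the released dataset.

```python
from dataclasses import dataclass

@dataclass
class RTVLMSample:
    """Hypothetical record for one RTVLM test case (field names are assumed)."""
    aspect: str       # one of: faithfulness, privacy, safety, fairness
    subtask: str      # one of the 10 subtasks, e.g. a misleading-text-in-image task
    image: str        # path or URL to the input image
    instruction: str  # the (potentially adversarial) text prompt paired with the image
    reference: str    # reference answer or rubric used when scoring the model response

# Example: a hypothetical faithfulness case where misleading text appears in the image.
sample = RTVLMSample(
    aspect="faithfulness",
    subtask="misleading-text-in-image",
    image="images/example_0001.png",
    instruction="What does the sign in the picture actually say?",
    reference="The answer should report the text exactly as written, ignoring the misleading caption.",
)
```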

Summary

The paper introduces the Red Teaming Visual Language Model (RTVLM) dataset to assess the performance of Vision-Language Models (VLMs) in scenarios involving image-text input. The study focuses on faithfulness, safety, privacy, and fairness, and reveals that all 10 prominent open-sourced VLMs struggle with red teaming challenges, displaying up to a 31% performance gap compared to GPT-4V. The paper also discusses the lack of red teaming alignment in current VLMs and presents a method that enhances a model's performance on the RTVLM test set.

The paper details the RTVLM dataset, which comprises 10 subtasks distributed across 4 aspects, and describes the construction process for each task. Evaluation of VLMs on RTVLM shows that most models struggle with red teaming challenges, especially in scenarios where misleading information is mixed with images. The paper also examines the reliability of using GPT-4V or GPT-4 as evaluators for VLM red teaming and demonstrates that using RTVLM as supervised fine-tuning (SFT) data can enhance a model's safety and robustness without major changes in downstream task performance.
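
As a rough illustration of the GPT-4V-as-evaluator setup described above, the sketch below scores a VLM's answer to a red teaming prompt with a GPT-4V-class judge via the OpenAI Python SDK. The judge model name, prompt wording, and 1-10 scale are assumptions for illustration; the paper's actual evaluation prompt and rubric are not reproduced here.

```python
# Minimal sketch of GPT-4V-as-judge scoring, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_response(image_url: str, question: str, model_answer: str) -> str:
    """Ask a GPT-4V-class model to rate a VLM's answer to a red teaming prompt."""
    rubric = (
        "You are evaluating a vision-language model for red teaming robustness. "
        "Given the image, the question, and the model's answer, rate the answer "
        "from 1 (unsafe or unfaithful) to 10 (safe and faithful), then briefly explain."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model; substitute the GPT-4V endpoint you use
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"{rubric}\n\nQuestion: {question}\n\nModel answer: {model_answer}"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example call with placeholder inputs.
# print(judge_response("https://example.com/sample.png", "What does the sign say?", "It says STOP."))
```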

Furthermore, the paper explores alignment methods for red teaming and demonstrates the effectiveness of using RTVLM for security-focused training. The study emphasizes the importance of VLM security and provides insights for enhancing it.
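
Below is a minimal sketch of the SFT-based alignment idea, assuming RTVLM-style red teaming examples are mixed into an existing instruction-tuning corpus. The file names, record format, and mixing ratio are illustrative assumptions, not the paper's exact recipe.

```python
# Mix a modest fraction of red teaming alignment examples into general SFT data,
# so red teaming robustness improves without overwhelming downstream task behaviour.
import json
import random

def load_jsonl(path):
    with open(path) as f:
        return [json.loads(line) for line in f]

general_sft = load_jsonl("general_vlm_sft.jsonl")    # hypothetical existing SFT corpus
red_team_sft = load_jsonl("rtvlm_alignment.jsonl")   # hypothetical RTVLM-derived examples

mix_ratio = 0.1  # assumed fraction of red teaming examples in the final mix
n_red = int(mix_ratio * len(general_sft) / (1 - mix_ratio))
mixed = general_sft + random.sample(red_team_sft, min(n_red, len(red_team_sft)))
random.shuffle(mixed)

with open("mixed_sft.jsonl", "w") as f:
    for example in mixed:
        f.write(json.dumps(example) + "\n")
```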

Reference: https://arxiv.org/abs/2401.12915