AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on Large Language Models

Dong Shu, Mingyu Jin, Chong Zhang, Lingyao Li, Zihao Zhou, Yongfeng Zhang

Research output: Chapter in Book or Report/Conference proceedingChapter

20 Downloads (Pure)

Abstract

Ensuring the security of large language models (LLMs) against attacks has become increasingly urgent, with jailbreak attacks representing one of the most sophisticated threats. To deal with such risks, we introduce an innovative framework that can help evaluate the effectiveness of jailbreak attacks on LLMs. Unlike traditional binary evaluations focusing solely on the robustness of LLMs, our method assesses the effectiveness of the attacking prompts themselves. We present two distinct evaluation frameworks: a coarse-grained evaluation and a fine-grained evaluation. Each framework uses a scoring range from 0 to 1, offering unique perspectives and allowing for the assessment of attack effectiveness in different scenarios. Additionally, we develop a comprehensive ground truth dataset specifically tailored for jailbreak prompts. This dataset serves as a crucial benchmark for our current study and provides a foundational resource for future research. By comparing with traditional evaluation methods, our study shows that the current results align with baseline metrics while offering a more nuanced and fine-grained assessment. It also helps identify potentially harmful attack prompts that might appear harmless in traditional evaluations. Overall, our work establishes a solid foundation for assessing a broader range of attack prompts in the area of prompt injection.
Original languageEnglish
Title of host publicationArxiv
Publication statusAccepted/In press - Dec 2024

Fingerprint

Dive into the research topics of 'AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on Large Language Models'. Together they form a unique fingerprint.

Cite this