AttackEval: How to Evaluate the Effectiveness of Jailbreak Attacking on Large Language Models

  • Dong Shu
  • Mingyu Jin
  • Chong Zhang
  • Lingyao Li
  • Zihao Zhou
  • Yongfeng Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

Jailbreak attacks represent one of the most sophisticated threats to the security of large language models (LLMs). To address such risks, we introduce an innovative framework for evaluating the effectiveness of jailbreak attacks on LLMs. Unlike traditional binary evaluations, which focus solely on the robustness of LLMs, our method assesses the effectiveness of the attack prompts themselves. We present two distinct evaluation frameworks: a coarse-grained evaluation and a fine-grained evaluation. Each framework uses a scoring range from 0 to 1, offering a unique perspective and enabling the assessment of attack effectiveness in different scenarios. Additionally, we develop a comprehensive ground-truth dataset specifically tailored to jailbreak prompts. This dataset serves as a crucial benchmark for the current study and provides a foundational resource for future research. Comparison with traditional evaluation methods shows that our results align with baseline metrics while offering a more nuanced, fine-grained assessment. Our approach also helps identify potentially harmful attack prompts that might appear harmless under traditional evaluations. Overall, our work establishes a solid foundation for assessing a broader range of attack prompts in the domain of prompt injection.
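To make the 0-to-1 scoring idea concrete, the sketch below shows one way a coarse-grained effectiveness score could be computed for an attack prompt. The judge labels and the averaging rule here are illustrative assumptions, not the paper's actual scoring procedure.

```python
# Illustrative sketch only: the paper's exact scoring rules are not
# reproduced here. We assume a hypothetical judge that labels each
# target-LLM response to a jailbreak prompt, then average the labels
# into a coarse-grained effectiveness score in [0, 1].

from typing import List

# Hypothetical judge labels for one response (assumed values).
FULL_COMPLY = 1.0     # model fully follows the harmful instruction
PARTIAL_COMPLY = 0.5  # model partially follows it
REFUSE = 0.0          # model refuses

def coarse_grained_score(labels: List[float]) -> float:
    """Average per-response labels into one 0-to-1 score per attack prompt."""
    if not labels:
        return 0.0
    return sum(labels) / len(labels)

# Example: one attack prompt tried against four model responses.
print(coarse_grained_score([FULL_COMPLY, PARTIAL_COMPLY, REFUSE, REFUSE]))  # 0.375
```

Under this assumed scheme, a prompt that is refused in a binary evaluation but elicits partial compliance would still receive a nonzero score, which is the kind of nuance the fine-grained framework is intended to capture.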
Original language: English
Article number: 2
Pages (from-to): 10-19
Number of pages: 10
Journal: ACM SIGKDD Explorations Newsletter
Volume: 27
Issue number: 1
DOIs
Publication status: Published - 31 May 2025

