Target-Driven Attack for Large Language Models

Chong Zhang, Mingyu Jin, Dong Shu, Taowen Wang, Dongfang Liu, Xiaobo Jin*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding › peer-review


Abstract

Current large language models (LLMs) provide a strong foundation for large-scale, user-oriented natural language tasks. However, users can easily inject adversarial text or instructions through the user interface, creating security challenges for LLMs, such as the model failing to give the correct answer. Although there is a large body of research on black-box attacks, most of it relies on random or heuristic strategies; it is unclear how these strategies relate to the attack success rate and thus how they can effectively improve model robustness. To address this problem, we propose a target-driven black-box attack method that redefines the attack's goal as maximizing the KL divergence between the conditional probabilities of the clean text and the attack text. Based on this goal, we transform the distance-maximization problem into two convex optimization problems: one to solve for the attack text and one to estimate the covariance. A projected gradient descent algorithm then solves for the vector corresponding to the attack text. Our target-driven black-box attack approach comprises two attack strategies: token manipulation and misinformation attack. Experimental results on multiple large language models and datasets demonstrate the effectiveness of our attack method.
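The core optimization described in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `W`, `p_clean`, the L2 projection radius, and the step size are all made-up stand-ins for the real LLM and its constraint set, and the paper's two convex subproblems (attack-text solving and covariance estimation) are omitted. The sketch only shows projected gradient ascent on an attack vector `x` to maximize KL(p_clean || p_adv(x)).

```python
# Hypothetical sketch of the KL-maximization step: projected gradient ascent
# on an attack vector x. W and p_clean are toy stand-ins, NOT the real model.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence D(p || q) for strictly positive distributions
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
V, d = 5, 8                            # toy vocabulary size / embedding dim
W = rng.normal(size=(V, d))            # stand-in for the model's output map
p_clean = softmax(rng.normal(size=V))  # output distribution on the clean text

x = np.zeros(d)                        # vector representing the attack text
radius, lr = 1.0, 0.5                  # L2 feasible ball and step size

kl_init = kl(p_clean, softmax(W @ x))
for _ in range(200):
    q = softmax(W @ x)                 # attacked output distribution p_adv(x)
    grad = W.T @ (q - p_clean)         # grad of KL(p_clean || q) w.r.t. x
    x = x + lr * grad                  # ascent step: push q away from p_clean
    n = np.linalg.norm(x)
    if n > radius:                     # project back onto the L2 ball
        x = x * (radius / n)
kl_final = kl(p_clean, softmax(W @ x))
```

After the loop, `kl_final` exceeds `kl_init`, i.e. the attacked distribution has been driven away from the clean one while `x` stays inside the feasible ball, which is the qualitative behavior the target-driven objective asks for.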
Original language: English
Title of host publication: 27th European Conference on Artificial Intelligence
Subtitle of host publication: ECAI 2024
Editors: Ulle Endriss
Series: Frontiers in Artificial Intelligence and Applications, Volume 392
Publisher: IOS Press
Pages: 1752-1759
Number of pages: 8
Edition: 1
ISSN (Electronic): 1879-8314
ISSN (Print): 0922-6389
Publication status: Published - 19 Oct 2024
