Abstract
Securing Large Vision-Language Models (LVLMs) has emerged as a critical concern for both researchers and practitioners. In this paper, we highlight and analyze the security issues of LVLMs, with particular emphasis on the reliability challenges that arise in practical deployments. We begin by reviewing recent studies on threats such as jailbreak and backdoor attacks, along with the countermeasures proposed to mitigate these risks. We then turn to real-world application problems, such as hallucinations and privacy leakage, as well as the ethical and legal research surrounding them. Finally, we outline the shortcomings of current studies and discuss directions for future research, with the aim of steering LVLM development in a safer direction. A curated list of LVLM-security-related resources is available at https://github.com/MingyuJ666/LVLM-Safety.
| Original language | English |
|---|---|
| Article number | 1 |
| Pages (from-to) | 3 |
| Number of pages | 22 |
| Journal | Communications in Computer and Information Science |
| Volume | 2315 |
| Publication status | Published - 27 Dec 2024 |
Cite this
Wang, T., Fang, Z., Xue, H., Zhang, C., Jin, M., Xu, W., Shu, D., Yang, S., Wang, Z., & Liu, D. (2024). Large Vision-Language Model Security: A Survey. Communications in Computer and Information Science, 2315, 3. Article 1.