Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Junjie Xiong*, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu*, Lingyao Li*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Proceeding › peer-review


Abstract

Large Language Models (LLMs) are increasingly equipped with real-time web search capabilities and integrated with protocols like the Model Context Protocol (MCP). This extension could introduce new security vulnerabilities. We present a systematic investigation of LLM vulnerabilities to hidden adversarial prompts through malicious font injection in external resources such as webpages, where attackers manipulate the code-to-glyph mapping to inject deceptive content that is invisible to users. We evaluate two critical attack scenarios: (1) "malicious content relay" and (2) "sensitive data leakage" through MCP-enabled tools. Our experiments reveal that indirect prompts with injected malicious fonts can bypass LLM safety mechanisms through external resources, achieving varying success rates depending on data sensitivity and prompt design. Our research underscores the urgent need for enhanced security measures in LLM deployments when processing external content.
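The core idea, a crafted code-to-glyph mapping, can be illustrated with a minimal toy sketch. The glyph map below is hypothetical and is not the paper's attack code; it only shows how the characters an LLM ingests can diverge from what a human reader sees.

```python
# Conceptual sketch of malicious font injection: a crafted font draws some
# code points as the glyphs of entirely different letters, so the page's
# underlying text (what an LLM reads) differs from its rendered appearance.

# Hypothetical malicious glyph map (code point -> glyph actually drawn).
malicious_glyph_map = {"a": "e", "t": "l", "k": "p"}

def rendered(text: str, glyph_map: dict) -> str:
    """What a human perceives on screen when the page uses the malicious font."""
    return "".join(glyph_map.get(ch, ch) for ch in text)

hidden = "attack"                              # what the LLM actually ingests
shown = rendered(hidden, malicious_glyph_map)  # what the user sees on screen
assert shown != hidden                         # visible and ingested text diverge
```

In a real attack the remapping would live in a web font's character-to-glyph table (for example, a modified `cmap` in a TTF served to the page), so the HTML source carries the adversarial prompt while the rendered page looks benign.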
Original language: English
Title of host publication: The 2025 Conference on Empirical Methods in Natural Language Processing
Subtitle of host publication: EMNLP 2025
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Place of Publication: Suzhou, China
Publisher: Association for Computational Linguistics (ACL)
Chapter: 2025.findings-emnlp
Pages: 7133-7147
Number of pages: 14
Volume: 2505
Edition: 16957
Publication status: Published - 3 Nov 2025
Event: The Conference on Empirical Methods in Natural Language Processing 2025 (EMNLP) - Suzhou International Expo Center, Suzhou, China
Duration: 5 Nov 2025 - 9 Nov 2025
https://2025.emnlp.org/

Publication series

Name: Association for Computational Linguistics

Conference

Conference: The Conference on Empirical Methods in Natural Language Processing 2025
Country/Territory: China
City: Suzhou
Period: 5/11/25 - 9/11/25
Internet address: https://2025.emnlp.org/

