TY - GEN
T1 - Injecting Commonsense Knowledge into Prompt Learning for Zero-Shot Text Classification
AU - Qian, Jing
AU - Chen, Qi
AU - Yue, Yong
AU - Atkinson, Katie
AU - Li, Gangmin
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/2/17
Y1 - 2023/2/17
N2 - The combination of pre-training and fine-tuning has become the default solution to Natural Language Processing (NLP) tasks. The emergence of prompt learning breaks this routine, especially in scenarios with low data resources. Insufficient labelled data and even unseen classes are frequent problems in text classification; equipping Pre-trained Language Models (PLMs) with task-specific prompts helps escape this dilemma. However, general-purpose PLMs are rarely endowed with commonsense knowledge. In this work, we propose a KG-driven verbalizer that leverages a commonsense Knowledge Graph (KG) to map label words to predefined classes. Specifically, we transform the mapping relationships into semantic relevance in the commonsense-injected embedding space. For the zero-shot text classification task, experimental results demonstrate the effectiveness of our KG-driven verbalizer on a Twitter dataset for natural disasters (i.e., HumAID) compared with other baselines.
AB - The combination of pre-training and fine-tuning has become the default solution to Natural Language Processing (NLP) tasks. The emergence of prompt learning breaks this routine, especially in scenarios with low data resources. Insufficient labelled data and even unseen classes are frequent problems in text classification; equipping Pre-trained Language Models (PLMs) with task-specific prompts helps escape this dilemma. However, general-purpose PLMs are rarely endowed with commonsense knowledge. In this work, we propose a KG-driven verbalizer that leverages a commonsense Knowledge Graph (KG) to map label words to predefined classes. Specifically, we transform the mapping relationships into semantic relevance in the commonsense-injected embedding space. For the zero-shot text classification task, experimental results demonstrate the effectiveness of our KG-driven verbalizer on a Twitter dataset for natural disasters (i.e., HumAID) compared with other baselines.
KW - knowledge graph
KW - prompt learning
KW - zero-shot text classification
UR - http://www.scopus.com/inward/record.url?scp=85173894088&partnerID=8YFLogxK
U2 - 10.1145/3587716.3587787
DO - 10.1145/3587716.3587787
M3 - Conference Proceeding
AN - SCOPUS:85173894088
T3 - ACM International Conference Proceeding Series
SP - 427
EP - 432
BT - ICMLC 2023 - Proceedings of the 2023 15th International Conference on Machine Learning and Computing
PB - Association for Computing Machinery
T2 - 15th International Conference on Machine Learning and Computing, ICMLC 2023
Y2 - 17 February 2023 through 20 February 2023
ER -