Abstract
In recent years, pre-trained language models have attracted significant attention due to their effectiveness, which stems from the rich knowledge acquired during pre-training. To mitigate the inconsistency between pre-training tasks and downstream tasks, and to ease the solution of downstream language tasks, prompt-based approaches have been introduced; these are particularly useful in low-resource scenarios. However, existing approaches mostly rely on verbalizers to translate the predicted vocabulary into task-specific labels. The major limitations of this strategy are that it ignores potentially relevant domain-specific words and that it is biased by the pre-training data. To address these limitations, we propose a framework that incorporates conceptual knowledge for text classification in the extreme zero-shot setting. The framework comprises prompt-based keyword extraction, weight assignment to each prompt keyword, and final representation estimation in the knowledge graph embedding space. We evaluated the method on four widely used datasets for sentiment analysis and topic detection, demonstrating that it consistently outperforms recently developed prompt-based approaches under the same experimental settings.
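The abstract outlines a three-step pipeline: keyword extraction via a prompt, probability-based weighting, and representation estimation in a knowledge-graph embedding space. The following minimal Python sketch is a rough illustration of how such a pipeline could be wired together, not the authors' implementation: it assumes a Hugging Face masked language model, and the prompt template, model choice, label names, and the `kg_embedding()` lookup are all hypothetical placeholders.

```python
# Minimal sketch of the three steps named in the abstract; NOT the authors'
# implementation. kg_embedding() is a hypothetical stand-in for a real
# knowledge-graph embedding lookup, and the prompt template is an assumption.
import numpy as np
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def kg_embedding(word: str) -> np.ndarray:
    """Hypothetical lookup of a pre-trained knowledge-graph embedding."""
    rng = np.random.default_rng(abs(hash(word)) % (2**32))
    return rng.standard_normal(128)  # placeholder vector for illustration

def classify(text: str, labels: list[str], top_k: int = 10) -> str:
    # Step 1: prompt-based keyword extraction at the mask position.
    prompt = f"{text} This text is about {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    probs = logits[0, mask_pos].softmax(dim=-1)
    top = probs.topk(top_k)
    keywords = tokenizer.convert_ids_to_tokens(top.indices.tolist())
    # Step 2: weight each extracted keyword by its predicted probability.
    weights = (top.values / top.values.sum()).numpy()
    # Step 3: estimate the text representation in the KG embedding space
    # as a weighted average of the keywords' KG embeddings.
    text_vec = sum(w * kg_embedding(k) for w, k in zip(weights, keywords))
    # Classify by cosine similarity to each label's KG embedding.
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(labels, key=lambda lab: cos(text_vec, kg_embedding(lab)))

print(classify("The team won the championship game.", ["sports", "politics"]))
```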
Original language | English |
---|---|
Title of host publication | Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop) |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 30-38 |
Number of pages | 9 |
Publication status | Published - 2023 |
Event | 61st Annual Meeting of the Association for Computational Linguistics - Toronto, Canada. Duration: 9 Jul 2023 → 14 Jul 2023 |
Conference

Conference | 61st Annual Meeting of the Association for Computational Linguistics |
---|---|
Country/Territory | Canada |
City | Toronto |
Period | 9/07/23 → 14/07/23 |