Abstract
Ontology alignment is critical in cross-domain integration; however, it typically requires the involvement of a human domain expert, which makes the task costly. Although a variety of machine-learning approaches have been proposed that simplify this task by learning patterns from experts, such techniques remain susceptible to domain knowledge updates, which can change the patterns and require further expert involvement. Large Language Models (LLMs) have demonstrated general cognitive abilities that have the potential to assist ontology alignment at the cognitive level, thus obviating the need for costly expert involvement. However, the process by which LLMs generate their output can be opaque, so the reliability and interpretability of such models are not always assured. This paper proposes a dialogue model in which multiple agents negotiate the correspondence between two knowledge sets with support from an LLM. We demonstrate that this approach not only reduces the need for domain-expert involvement in ontology alignment, but also produces results that are interpretable despite the use of LLMs.
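The abstract does not detail the dialogue protocol, so the following is a minimal, hypothetical sketch of the general idea: agents exchange accept/reject moves over candidate correspondences, each move justified by a support score that a real system would obtain by prompting an LLM (stubbed out here with a lexical heuristic so the sketch runs offline). All names (`Correspondence`, `Agent`, `llm_support`, `negotiate`) are illustrative assumptions, not the authors' API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    source_term: str   # concept from the first ontology
    target_term: str   # candidate counterpart in the second ontology

def llm_support(c: Correspondence) -> float:
    """Stand-in for an LLM query rating how plausible a correspondence
    is, in [0, 1]. A real system would prompt an LLM with both concept
    definitions; here a trivial lexical heuristic keeps the sketch
    runnable without an API key."""
    a, b = c.source_term.lower(), c.target_term.lower()
    if a == b:
        return 1.0
    return 0.6 if a in b or b in a else 0.2

class Agent:
    def __init__(self, name: str, threshold: float):
        self.name = name
        self.threshold = threshold  # minimum support needed to accept

    def evaluate(self, c: Correspondence) -> bool:
        score = llm_support(c)
        verdict = score >= self.threshold
        # Printing the score and threshold makes each dialogue move
        # inspectable, which is the interpretability claim in spirit.
        print(f"{self.name}: {c.source_term} ~ {c.target_term} "
              f"score={score:.2f} -> {'accept' if verdict else 'reject'}")
        return verdict

def negotiate(agents, candidates):
    """One dialogue round: a correspondence enters the alignment
    only if every agent accepts it."""
    return [c for c in candidates if all(a.evaluate(c) for a in agents)]

if __name__ == "__main__":
    candidates = [
        Correspondence("Author", "Writer"),
        Correspondence("Paper", "Article"),
        Correspondence("Paper", "Paper"),
    ]
    agents = [Agent("A", threshold=0.5), Agent("B", threshold=0.5)]
    alignment = negotiate(agents, candidates)
    print("Agreed alignment:",
          [(c.source_term, c.target_term) for c in alignment])
```

In this sketch, the unanimity rule in `negotiate` is one simple design choice; the actual paper may use a richer negotiation protocol with counter-proposals or argumentation moves.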
| Original language | English |
| --- | --- |
| Pages (from-to) | 2594-2596 |
| Number of pages | 3 |
| Journal | Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS |
| Volume | 2024-May |
| Publication status | Published - 2024 |
| Event | 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024, Auckland, New Zealand, 6-10 May 2024 |
Keywords
- Dialogue
- Large Language Model
- Multi-Agent System
- Negotiation
- Ontology Alignment