TY - GEN
T1 - Large Language Models for HeXie Management Theory
T2 - 2025 9th International Conference on Control Engineering and Artificial Intelligence, CCEAI 2025
AU - Xu, Yulu
AU - Chen, Shishuo
AU - Tang, Lisirui
AU - Yun, Jiya
AU - Li, Gangmin
AU - Wang, Chengyu
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/5/13
Y1 - 2025/5/13
N2 - HeXie Management Theory (HXMT) is a modern management theory for organizations. It provides macro-level considerations for both the internal mechanisms of an organization and its overall operational patterns. It has been applied in domains such as healthcare, rural construction, university management, and large-scale engineering projects, and has proved useful. There are pressing demands for its wider adoption. Large Language Models (LLMs) have been widely used in natural language processing and content generation. Re-training LLMs to incorporate HeXie Management Theory could therefore be valuable. There are two popular methods to achieve this goal: fine-tuning and Retrieval-Augmented Generation (RAG); each approach has pros and cons. This paper reports our comparative study of the two approaches. Our research employs datasets from HXMT and chooses the open-source platforms LLaMA-2, LLaMA-3, and ERNIE-Speed for fine-tuning, evaluated on four metrics plus manual evaluation; for RAG we used ERNIE models evaluated on five dimensions. Our results show that the RAG-based ERNIE-speed-App performs better than the fine-tuned ERNIE-speed-8k model under the same training data volume. This may shed some light on similar applications where a new theory needs to be integrated into an LLM to specialize it for particular applications. Our work is available at https://alex17swim.com/hxjun2.
AB - HeXie Management Theory (HXMT) is a modern management theory for organizations. It provides macro-level considerations for both the internal mechanisms of an organization and its overall operational patterns. It has been applied in domains such as healthcare, rural construction, university management, and large-scale engineering projects, and has proved useful. There are pressing demands for its wider adoption. Large Language Models (LLMs) have been widely used in natural language processing and content generation. Re-training LLMs to incorporate HeXie Management Theory could therefore be valuable. There are two popular methods to achieve this goal: fine-tuning and Retrieval-Augmented Generation (RAG); each approach has pros and cons. This paper reports our comparative study of the two approaches. Our research employs datasets from HXMT and chooses the open-source platforms LLaMA-2, LLaMA-3, and ERNIE-Speed for fine-tuning, evaluated on four metrics plus manual evaluation; for RAG we used ERNIE models evaluated on five dimensions. Our results show that the RAG-based ERNIE-speed-App performs better than the fine-tuned ERNIE-speed-8k model under the same training data volume. This may shed some light on similar applications where a new theory needs to be integrated into an LLM to specialize it for particular applications. Our work is available at https://alex17swim.com/hxjun2.
KW - Finetune
KW - HeXie Management Theory
KW - Large Language Models
KW - RAG
UR - http://www.scopus.com/inward/record.url?scp=105007287309&partnerID=8YFLogxK
U2 - 10.1145/3722150.3722167
DO - 10.1145/3722150.3722167
M3 - Conference Proceeding
AN - SCOPUS:105007287309
T3 - Proceedings of 2025 9th International Conference on Control Engineering and Artificial Intelligence, CCEAI 2025
SP - 35
EP - 39
BT - Proceedings of 2025 9th International Conference on Control Engineering and Artificial Intelligence, CCEAI 2025
A2 - Dan, Zhang
A2 - Yong, Yue
A2 - Ogiela, Marek
PB - Association for Computing Machinery, Inc
Y2 - 16 January 2025 through 19 January 2025
ER -