The proposed research investigates integrating large language models (LLMs) into edge computing systems to improve the efficiency of resource allocation. Efficient resource allocation in edge computing remains an open research problem because user demands shift dynamically and edge devices are heterogeneous. The proposed approach leverages the language understanding and reasoning capabilities of LLMs to build a more flexible and adaptive allocation strategy: fine-grained prompt engineering templates combine the current system state with candidate allocations suggested by traditional heuristic algorithms, and the model is instructed to generate context-aware allocation recommendations. The resulting system will be evaluated on performance metrics such as resource utilization and service availability to demonstrate its effectiveness. By introducing an LLM-driven approach to resource allocation, this project has the potential to make a significant contribution to the field of edge computing.
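The proposal does not specify a concrete template design, but the intended pipeline can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the greedy heuristic, the data schema, and the function names (`greedy_heuristic`, `build_prompt`) are all hypothetical stand-ins, and the call to an LLM API is deliberately omitted, since any provider could be slotted in at that point.

```python
import json


def greedy_heuristic(tasks, nodes):
    """Toy stand-in for a traditional heuristic allocator (assumption:
    any greedy/bin-packing baseline could be used here): assign each
    task, largest first, to the node with the most free CPU."""
    allocation = {}
    free = {n["id"]: n["cpu"] for n in nodes}
    for t in sorted(tasks, key=lambda t: -t["cpu"]):
        node = max(free, key=free.get)
        if free[node] >= t["cpu"]:
            allocation[t["id"]] = node
            free[node] -= t["cpu"]
    return allocation


def build_prompt(tasks, nodes, heuristic_allocation):
    """Fine-grained prompt template: serialize the system state together
    with the heuristic's suggestion and instruct the model to produce a
    context-aware allocation recommendation."""
    return (
        "You are an edge-computing resource allocator.\n"
        f"Edge nodes: {json.dumps(nodes)}\n"
        f"Pending tasks: {json.dumps(tasks)}\n"
        f"Heuristic suggestion: {json.dumps(heuristic_allocation)}\n"
        "Considering resource utilization and service availability, "
        "return an improved task-to-node allocation as JSON."
    )


nodes = [{"id": "edge-1", "cpu": 4}, {"id": "edge-2", "cpu": 8}]
tasks = [{"id": "t1", "cpu": 3}, {"id": "t2", "cpu": 6}]
prompt = build_prompt(tasks, nodes, greedy_heuristic(tasks, nodes))
# `prompt` would then be sent to an LLM; its JSON reply becomes the
# context-aware recommendation that the system evaluates.
```

Seeding the prompt with the heuristic's output, rather than asking the model to allocate from scratch, anchors the LLM to a feasible baseline that it only needs to refine.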