Time Series Forecasting with LLMs: Understanding and Enhancing Model Capabilities

Hua Tang, Chong Zhang, Mingyu Jin, Qinkai Yu, Zhenting Wang, Xiaobo Jin, Yongfeng Zhang, Mengnan Du*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Large language models (LLMs) have been applied in many fields and have developed rapidly in recent years. As a classic machine learning task, time series forecasting has recently been boosted by LLMs. Recent works treat large language models as zero-shot time series reasoners without further fine-tuning, achieving remarkable performance. However, some unexplored research problems remain when applying LLMs to time series forecasting in the zero-shot setting. For instance, the LLMs' preferences for the input time series are not well understood. In this paper, by comparing LLMs with traditional time series forecasting models, we observe many interesting properties of LLMs in the context of time series forecasting. First, our study shows that LLMs perform well in predicting time series with clear patterns and trends but face challenges with datasets lacking periodicity. This observation can be explained by the ability of LLMs to recognize the underlying period within datasets, which is supported by our experiments. In addition, we investigate the input strategy and find that incorporating external knowledge and adopting natural language paraphrases substantially improve the predictive performance of LLMs for time series. Our study contributes insights into LLMs' advantages and limitations in time series forecasting under different conditions. The code is at https://github.com/MingyuJ666/Time-Series-Forecasting-with-LLMs.
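The zero-shot setting described above typically works by serializing the numeric history into text and prompting the model to continue the sequence; the abstract's finding that external knowledge and natural language paraphrases help suggests prepending such context to the prompt. The sketch below illustrates this idea only; the function names (`serialize_series`, `build_zero_shot_prompt`) and the exact prompt wording are hypothetical and not taken from the paper or its repository.

```python
def serialize_series(values, precision=2):
    """Render a numeric time series as comma-separated text, the common
    way to present a series to an LLM as a string (illustrative only)."""
    return ", ".join(f"{v:.{precision}f}" for v in values)

def build_zero_shot_prompt(history, horizon, context=""):
    """Assemble a zero-shot forecasting prompt. `context` can carry
    external knowledge or a natural-language paraphrase of the series,
    the two input strategies the abstract reports as helpful."""
    prompt = ""
    if context:
        prompt += f"Background: {context}\n"
    prompt += (
        f"Here is a time series: {serialize_series(history)}\n"
        f"Continue the series with the next {horizon} values, "
        "comma-separated, and output nothing else."
    )
    return prompt

# Example: a short history with a paraphrased description as context.
print(build_zero_shot_prompt([1.0, 1.5, 2.0], 2,
                             context="Daily temperature readings in spring."))
```

The resulting string would then be sent to an LLM; the model's comma-separated completion is parsed back into numbers to obtain the forecast.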
Original language: English
Article number: 9
Pages (from-to): 109-118
Number of pages: 10
Journal: ACM SIGKDD Explorations Newsletter
Volume: 26
Issue number: 2
DOIs
Publication status: Published - 29 Jan 2025

