Abdullahi, Shamsu and Usman Danyaro, Kamaluddeen and Zakari, Abubakar and Abdul Aziz, Izzatdin and Amila Wan Abdullah Zawawi, Noor and Adamu, Shamsuddeen (2025) Time-Series Large Language Models: A Systematic Review of State-of-the-Art. IEEE Access, 13. pp. 30235-30261. ISSN 2169-3536
Full text not available from this repository.

Abstract
Large Language Models (LLMs) have transformed Natural Language Processing (NLP) and Software Engineering by fostering innovation, streamlining processes, and enabling data-driven decision-making. Recently, the adoption of LLMs in time-series analysis has catalyzed the emergence of time-series LLMs, a rapidly evolving research area. Existing reviews provide foundational insights into time-series LLMs but lack a comprehensive examination of recent advancements and do not adequately address critical challenges in this domain. This Systematic Literature Review (SLR) bridges these gaps by analyzing state-of-the-art contributions in time-series LLMs, focusing on architectural innovations, tokenization strategies, tasks, datasets, evaluation metrics, and unresolved challenges. Using a rigorous methodology based on PRISMA guidelines, over 700 studies from 2020 to 2024 were reviewed, with 59 relevant studies selected from journals, conferences, and workshops. Key findings reveal advancements in architectures and novel tokenization strategies tailored for temporal data. Forecasting dominates the identified tasks, appearing in 79.66% of the selected studies, while classification and anomaly detection remain underexplored. Furthermore, the analysis reveals a strong reliance on datasets from the energy and transportation domains, highlighting the need for more diverse datasets. Despite these advancements, significant challenges persist, including tokenization inefficiencies, prediction hallucinations, and difficulties in modeling long-term dependencies. These issues hinder the robustness, scalability, and adaptability of time-series LLMs across diverse applications. To address these challenges, this SLR outlines a research roadmap emphasizing the improvement of tokenization methods, the development of mechanisms for capturing long-term dependencies, the mitigation of hallucination effects, and the design of scalable, interpretable models for diverse time-series tasks. © 2025 The Authors.
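To make the tokenization theme in the abstract concrete, the sketch below shows one widely used family of strategies for feeding temporal data to an LLM backbone: instance-normalize the series, slice it into fixed-length patches, and project each patch into an embedding that plays the role of a token. This is an illustrative, minimal example only; the function name, patch length, stride, and the random (untrained) projection are assumptions for demonstration and are not taken from the reviewed paper.

```python
# Minimal sketch of patch-based time-series tokenization (illustrative only).
import numpy as np


def patch_tokenize(series, patch_len=16, stride=8, d_model=64, rng=None):
    """Turn a 1-D series into a sequence of patch embeddings ("tokens")."""
    rng = rng or np.random.default_rng(0)

    # Instance normalization: remove the series' own mean and scale so the
    # backbone sees comparable magnitudes across datasets.
    mean, std = series.mean(), series.std() + 1e-8
    z = (series - mean) / std

    # Slice overlapping windows of length `patch_len` every `stride` steps.
    n_patches = 1 + (len(z) - patch_len) // stride
    patches = np.stack(
        [z[i * stride : i * stride + patch_len] for i in range(n_patches)]
    )  # shape: (n_patches, patch_len)

    # A fixed random linear projection stands in for the learned embedding
    # layer that a real time-series LLM would train jointly with the backbone.
    projection = rng.normal(scale=patch_len ** -0.5, size=(patch_len, d_model))
    return patches @ projection  # shape: (n_patches, d_model)


if __name__ == "__main__":
    t = np.arange(512, dtype=float)
    noise = 0.05 * np.random.default_rng(1).normal(size=t.size)
    tokens = patch_tokenize(np.sin(0.1 * t) + noise)
    print(tokens.shape)  # (63, 64): 63 patch tokens of width 64
```

The design choice this sketch illustrates is why patching matters for the challenges the review lists: fewer, wider tokens shorten the sequence the backbone must attend over, which eases long-term dependency modeling, at the cost of coarser temporal resolution.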
| Item Type: | Article |
| --- | --- |
| Impact Factor: | Cited by: 4; All Open Access, Gold Open Access |
| Uncontrolled Keywords: | Metadata; Natural language processing systems; Network security; Spatio-temporal data; Time series; Language model; Large language model; Long-term dependencies; State of the art; Systematic literature review; Time-series large language model; Times series; Tokenization; Anomaly detection |
| Depositing User: | Mr Ahmad Suhairi Mohamed Lazim |
| Date Deposited: | 08 Jul 2025 16:28 |
| Last Modified: | 08 Jul 2025 16:28 |
| URI: | http://scholars.utp.edu.my/id/eprint/38930 |