Paper Detail
Hongyang Chen, Zhongwu Sun, Hongfei Ye, Kunchi Li, Xuemin Lin
Continual learning (CL) has emerged as a pivotal paradigm for enabling large language models (LLMs) to dynamically adapt to evolving knowledge and sequential tasks while mitigating catastrophic forgetting, a critical limitation of the static pre-training paradigm inherent to modern LLMs. This survey presents a comprehensive overview of CL methodologies tailored for LLMs, structured around three core training stages: continual pre-training, continual fine-tuning, and continual alignment. Beyond the canonical taxonomy of rehearsal-, regularization-, and architecture-based methods, we further subdivide each category by its distinct forgetting-mitigation mechanisms and conduct a rigorous comparative analysis of how well traditional CL methods adapt to LLMs and which critical improvements they require. In doing so, we explicitly highlight core distinctions between LLM CL and traditional machine learning, particularly with respect to scale, parameter efficiency, and emergent capabilities. Our analysis covers essential evaluation metrics, including forgetting rates and knowledge-transfer efficiency, along with emerging benchmarks for assessing CL performance. This survey reveals that while current methods demonstrate promising results in specific domains, fundamental challenges persist in achieving seamless knowledge integration across diverse tasks and temporal scales. This systematic review contributes to the growing body of knowledge on LLM adaptation, providing researchers and practitioners with a structured framework for understanding current achievements and future opportunities in lifelong learning for language models.
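As a concrete, hypothetical illustration of the rehearsal-based family the survey covers (not the paper's own method), the sketch below keeps a small reservoir-sampled buffer of earlier-task examples and mixes them into each new fine-tuning batch; the buffer capacity, replay ratio, and function names are assumptions made for illustration only.

```python
import random


class ReplayBuffer:
    """Reservoir-sampled buffer of past-task examples used for rehearsal."""

    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample of everything seen so far.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))


def rehearsal_batches(new_task_examples, buffer: ReplayBuffer,
                      batch_size: int = 8, replay_ratio: float = 0.25):
    """Yield fine-tuning batches that mix new-task data with replayed data."""
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    for start in range(0, len(new_task_examples), n_new):
        fresh = new_task_examples[start:start + n_new]
        batch = fresh + buffer.sample(n_replay)
        yield batch
        # Only add new-task examples so the buffer reflects past data.
        for ex in fresh:
            buffer.add(ex)
```

In practice, rehearsal-based methods for LLMs differ mainly in what they store (raw examples, generated pseudo-samples, or compressed representations) and how much replay they mix in; the sketch above only fixes the batching skeleton.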
This paper presents a comprehensive survey of continual learning (CL) methodologies for large language models (LLMs), categorizing them into three core stages: continual pre-training, continual fine-tuning, and continual alignment. It systematically analyzes and compares these methods, highlighting key distinctions from traditional CL concerning scale, parameter efficiency, and emergent capabilities.
The survey concludes that, while current methods show promise, fundamental challenges remain in achieving seamless knowledge integration across diverse tasks and temporal scales.
The research conducts a systematic review and comparative analysis of existing CL methods, proposing a structured framework to organize and evaluate techniques based on their training stage and forgetting mitigation mechanisms.
This work provides researchers and practitioners with a structured guide to understand current achievements, persistent challenges, and future opportunities in enabling LLMs to learn continuously.
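Since the survey's evaluation discussion centers on forgetting rates and knowledge transfer, the following minimal sketch (not taken from the paper) computes two commonly used CL metrics, average forgetting and backward transfer, from a task-accuracy matrix; the matrix layout and function names are assumptions for illustration.

```python
import numpy as np


def average_forgetting(acc: np.ndarray) -> float:
    """Average forgetting after training on all tasks.

    acc[i, j] is accuracy on task j measured right after training on task i.
    For each earlier task j, forgetting is the drop from its best past
    accuracy to its final accuracy.
    """
    T = acc.shape[0]
    drops = [acc[:T - 1, j].max() - acc[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))


def backward_transfer(acc: np.ndarray) -> float:
    """Backward transfer (BWT): average change on earlier tasks after
    learning later ones; negative values indicate forgetting."""
    T = acc.shape[0]
    return float(np.mean([acc[T - 1, j] - acc[j, j] for j in range(T - 1)]))


# Toy example: 3 sequential tasks; rows = after training task i, cols = task j.
acc = np.array([
    [0.80, 0.10, 0.05],
    [0.70, 0.85, 0.10],
    [0.60, 0.75, 0.90],
])
print(f"average forgetting: {average_forgetting(acc):.3f}")  # 0.150
print(f"backward transfer:  {backward_transfer(acc):.3f}")   # -0.150
```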
@article{chen2026continual,
title = {Continual Learning in Large Language Models: Methods, Challenges, and Opportunities},
author = {Hongyang Chen and Zhongwu Sun and Hongfei Ye and Kunchi Li and Xuemin Lin},
year = {2026},
abstract = {Continual learning (CL) has emerged as a pivotal paradigm for enabling large language models (LLMs) to dynamically adapt to evolving knowledge and sequential tasks while mitigating catastrophic forgetting, a critical limitation of the static pre-training paradigm inherent to modern LLMs. This survey presents a comprehensive overview of CL methodologies tailored for LLMs, structured around three core training stages: continual pre-training, continual fine-tuning, and continual alignment. Beyond the ca},
url = {https://arxiv.org/abs/2603.12658},
keywords = {cs.CL, cs.AI},
eprint = {2603.12658},
archiveprefix = {arXiv},
}