Critical Patch-Aware Sparse Prompting with Decoupled Training for Continual Learning on the Edge
Abstract
Continual learning (CL) on edge devices requires not only high accuracy but also training-time efficiency to support on-device model adaptation under limited memory and compute resources. While prompt-based continual learning (PCL) achieves strong performance with few learnable parameters, existing studies primarily optimize accuracy or inference efficiency, overlooking the cost of on-device training. In this paper, we propose CPS-Prompt, a critical patch-aware sparse prompting framework that improves training efficiency with minimal accuracy loss by combining Critical Patch Sampling (CPS) for task-aware token selection with Decoupled Prompt–Classifier Training (DPCT) for representation alignment. Extensive experiments on three public datasets demonstrate that CPS-Prompt reduces peak memory usage by 36\% and training time by 35\%, while maintaining accuracy within 2\% of the state-of-the-art C-Prompt method and matching that of the balanced CODA-Prompt baseline.