Poster
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning
Hao Zhu · Yifei Zhang · Junhao Dong · Piotr Koniusz
Abstract:
Continual learning requires models to learn tasks sequentially while maintaining a delicate balance between stability (retaining knowledge of previous tasks) and plasticity (adapting to new tasks). A key challenge is preventing interference between tasks, where learning new tasks degrades performance on previously learned ones. Recent approaches have leveraged parameter-efficient fine-tuning (PEFT) methods, which adapt pre-trained models by injecting a small number of learnable parameters. However, existing PEFT-based continual learning methods such as InfLoRA face fundamental limitations: they rely on complex optimization procedures to learn orthogonal task-specific spaces, and finding such spaces becomes increasingly difficult as tasks accumulate. We propose a novel bilinear reformulation that fundamentally reimagines task separation through fixed orthogonal bases. Our key insight is that by expanding the parameter space quadratically through two fixed bases, we can achieve "almost orthogonal" task subspaces probabilistically, eliminating the need for explicit interference elimination procedures. We provide theoretical guarantees that this approach reduces the probability of task interference from to, ensuring reliable task separation without complex optimization. Through extensive experiments on ImageNet-R, CIFAR100, and DomainNet, we validate our theoretical bounds and demonstrate state-of-the-art performance with a reduced parameter count.
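The core idea of fixed orthogonal bases with a small trainable core can be illustrated with a short sketch. The PyTorch snippet below is a hypothetical illustration, not the authors' implementation: it assumes the bilinear adapter takes the form ΔW = P C Qᵀ, where P and Q are fixed random orthonormal bases shared across tasks and only the small core C is trained per task; all names, shapes, and the rank value are illustrative assumptions.

```python
import torch
import torch.nn as nn


class BilinearAdapter(nn.Module):
    """Sketch of a bilinear low-rank adapter: delta_W = P @ C @ Q^T.

    P (d_out x r) and Q (d_in x r) are fixed orthonormal bases, frozen
    after initialization; the r x r core C is the only trainable,
    task-specific parameter block (illustrative, not the paper's code).
    """

    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        # Fixed random orthonormal bases, stored as non-trainable buffers.
        p, _ = torch.linalg.qr(torch.randn(d_out, rank))
        q, _ = torch.linalg.qr(torch.randn(d_in, rank))
        self.register_buffer("P", p)
        self.register_buffer("Q", q)
        # Small trainable core: the per-task parameters live here.
        self.core = nn.Parameter(torch.zeros(rank, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d_in) -> adapter output: (..., d_out)
        return (x @ self.Q) @ self.core @ self.P.T


if __name__ == "__main__":
    adapter = BilinearAdapter(d_in=768, d_out=768, rank=16)
    x = torch.randn(4, 768)
    print(adapter(x).shape)  # torch.Size([4, 768])
```

Because the bases are fixed and random, the task-specific subspaces spanned through different cores tend to be nearly orthogonal by construction, which is the probabilistic separation property the abstract refers to.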