SG-LoRA: Semantic-Guided LoRA Parameter Generation
Miaoge Li ⋅ Yang Chen ⋅ Zhijie Rao ⋅ Can Jiang ⋅ Kang Wei ⋅ Jingcai Guo
Abstract
Generating new Low-Rank Adaptation (LoRA) weights from pre-trained LoRAs has demonstrated strong generalization capabilities across various tasks, enabling the efficient transfer of AI models, particularly to resource-constrained edge devices. However, previous studies either merge base LoRAs via weighting coefficients or train a generative model under the closed-world assumption, limiting their efficiency and flexibility in complex edge use cases. This challenge grows further when there are significant domain shifts between training and deployment. To this end, we propose Semantic-Guided LoRA Parameter Generation (SG-LoRA), a tuning-free generative framework that efficiently produces task-specific parameters for unseen tasks via a semantic-to-LoRA pipeline. Concretely, SG-LoRA uses task descriptions as a semantic bridge, measuring their proximity to a set of known expert tasks in a shared embedding space. Based on this semantic guidance, it models the target task's LoRA parameter distribution to generate high-performing parameters for novel tasks. SG-LoRA enables the real-time construction of LoRA models aligned with individual user intents by distilling knowledge from prominent LoRA experts, while also offering a privacy-preserving solution for personalized model adaptation in a novel zero-shot open-world setting proposed in this work. Extensive experiments on multiple challenging tasks confirm the superior performance and remarkable adaptability of SG-LoRA. The code is attached in the supplementary material.
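The semantic-to-LoRA idea can be illustrated with a minimal, hypothetical sketch: a task description is embedded, its similarity to known expert tasks is computed in a shared embedding space, and expert LoRA parameters are combined according to that semantic proximity. All names, shapes, and the stand-in encoder below are assumptions for illustration, not the paper's actual generative model.

```python
# Simplified sketch (not the paper's method): semantic-similarity-weighted
# combination of expert LoRA matrices, guided by task-description embeddings.
import hashlib
import numpy as np

rng = np.random.default_rng(0)

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in text encoder: a deterministic pseudo-embedding per string.
    (A real system would use a pretrained sentence encoder.)"""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)

def softmax(x: np.ndarray, temp: float = 0.1) -> np.ndarray:
    """Temperature-scaled softmax over similarity scores."""
    z = (x - x.max()) / temp
    e = np.exp(z)
    return e / e.sum()

# A toy bank of "expert" tasks, each with an illustrative LoRA weight matrix.
expert_tasks = ["sentiment analysis", "topic classification", "translation"]
expert_loras = {t: rng.standard_normal((4, 4)) for t in expert_tasks}

def generate_lora(task_description: str) -> np.ndarray:
    """Blend expert LoRAs by cosine similarity of task embeddings."""
    q = embed(task_description)
    sims = np.array([
        q @ embed(t) / (np.linalg.norm(q) * np.linalg.norm(embed(t)))
        for t in expert_tasks
    ])
    weights = softmax(sims)
    return sum(w * expert_loras[t] for w, t in zip(weights, expert_tasks))

new_lora = generate_lora("classify movie reviews by sentiment")
print(new_lora.shape)  # (4, 4)
```

Note that SG-LoRA as described in the abstract goes beyond such fixed weighted merging: it models the target task's LoRA parameter *distribution*, which this sketch does not attempt to capture.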