

Poster

AdaDARE-γ: Balancing Stability and Plasticity in Multi-modal LLMs through Efficient Adaptation

Jingyi Xie · Jintao Yang · Zhunchen Luo · Yunbo Cao · Qiang Gao · Mengyuan Zhang · Wenpeng Hu


Abstract: Adapting Multi-modal Large Language Models (MLLMs) to target tasks often suffers from catastrophic forgetting, where acquiring new task-specific knowledge compromises performance on pre-trained tasks. In this paper, we introduce AdaDARE-γ, an efficient approach that alleviates catastrophic forgetting by controllably injecting new task-specific knowledge through adaptive parameter selection from fine-tuned models, without requiring retraining procedures. This approach consists of two key innovations: (1) an adaptive parameter selection mechanism that identifies and retains the most task-relevant parameters from fine-tuned models, and (2) a controlled task-specific information injection strategy that precisely balances the preservation of pre-trained knowledge with the acquisition of new capabilities. Theoretical analysis proves the optimality of our parameter selection strategy and establishes bounds for the task-specific information injection factor. Extensive experiments on InstructBLIP and LLaVA-1.5 across image captioning and visual question answering tasks demonstrate that AdaDARE-γ establishes new state-of-the-art results in balancing model performance. Specifically, it maintains 98.2% of pre-training effectiveness on original tasks while achieving 98.7% of standard fine-tuning performance on target tasks.
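
The abstract does not spell out implementation details, but a minimal sketch of the general idea it describes, merging a selected subset of fine-tuning deltas back into the pre-trained weights under a controlled injection factor, might look like the following. The top-magnitude selection rule and the `keep_ratio` and `gamma` defaults here are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def adadare_style_merge(pretrained_sd, finetuned_sd, keep_ratio=0.1, gamma=0.5):
    """Hypothetical sketch: retain a fraction of the largest task-specific
    weight deltas and inject them into the pre-trained weights, scaled by
    an injection factor gamma. Selection criterion and defaults are
    assumptions for illustration only."""
    merged = {}
    for name, w_pre in pretrained_sd.items():
        delta = finetuned_sd[name] - w_pre              # task-specific update
        flat = delta.abs().flatten()
        k = max(1, int(keep_ratio * flat.numel()))      # number of deltas to keep
        threshold = flat.kthvalue(flat.numel() - k + 1).values
        mask = (delta.abs() >= threshold).to(delta.dtype)
        merged[name] = w_pre + gamma * delta * mask     # controlled injection
    return merged

# Usage sketch: build the merged weights from two state_dicts and load them,
# with no additional training step.
# merged = adadare_style_merge(base_model.state_dict(), tuned_model.state_dict())
# base_model.load_state_dict(merged)
```

In this reading, `keep_ratio` would govern how many fine-tuned parameters are treated as task-relevant and `gamma` would control how much of their information is injected, which is the stability-plasticity trade-off the abstract refers to.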
