Poster
Joint Scheduling of Causal Prompts and Tasks for Multi-Task Learning
Chaoyang Li · Jianyang Qin · Jinhao Cui · Zeyu Liu · Ning Hu · Qing Liao
Multi-task prompt learning has emerged as a promising technique for fine-tuning pre-trained Vision-Language Models (VLMs) on various downstream tasks. However, existing methods ignore the challenges posed by spurious correlations and dynamic task relationships, which may degrade model performance. To tackle these challenges, we propose JSCPT, a novel approach for \textit{Joint Scheduling of Causal Prompts and Tasks} to enhance multi-task prompt learning. Specifically, we first design a \textit{Multi-Task Vision-Language Prompt} (MTVLP) model, which learns task-shared and task-specific vision-language prompts and selects useful prompt features via causal intervention, alleviating spurious correlations. Then, we propose a task-prompt scheduler that models inter-task affinities and assesses the causal effect of prompt features to optimize the multi-task prompt learning process. Finally, we formulate the scheduler and the multi-task prompt learning process as a bi-level optimization problem to optimize prompts and tasks adaptively. In the lower-level optimization, MTVLP is updated with the scheduled gradient, while in the upper-level optimization, the scheduler is updated with the implicit gradient. Extensive experiments show the superiority of our proposed JSCPT approach over several baselines in terms of multi-task prompt learning for pre-trained VLMs.
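The following is a minimal PyTorch sketch of the bi-level update described in the abstract, assuming hypothetical `mtvlp`, `scheduler`, and per-task loss interfaces (these names are illustrative placeholders, not the authors' released code). The upper-level step uses a simple first-order approximation of the implicit gradient rather than the exact hypergradient.

```python
import torch

def bilevel_step(mtvlp, scheduler, tasks, lr_lower=1e-3, lr_upper=1e-4):
    """One joint scheduling step (illustrative first-order approximation)."""
    # --- Lower level: update MTVLP prompts with the scheduled gradient ---
    task_losses = torch.stack([t.train_loss(mtvlp) for t in tasks])
    # Scheduler maps task states to weights (inter-task affinities / causal effects).
    weights = scheduler(task_losses.detach())
    lower_loss = (weights.detach() * task_losses).sum()
    grads = torch.autograd.grad(lower_loss, list(mtvlp.parameters()))
    with torch.no_grad():
        for p, g in zip(mtvlp.parameters(), grads):
            p -= lr_lower * g  # scheduled gradient step on prompt parameters

    # --- Upper level: update the scheduler on held-out losses ---
    # First-order stand-in for the implicit gradient: gradients flow into the
    # scheduler only through the task weights it produced.
    val_losses = torch.stack([t.val_loss(mtvlp) for t in tasks]).detach()
    upper_loss = (weights * val_losses).sum()
    upper_grads = torch.autograd.grad(upper_loss, list(scheduler.parameters()))
    with torch.no_grad():
        for p, g in zip(scheduler.parameters(), upper_grads):
            p -= lr_upper * g
```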