FAAR: Efficient Frequency-Aware Multi-Task Fine-Tuning via Automatic Rank Selection
Abstract
Adapting models pre-trained on large-scale datasets is a proven way to quickly reach strong performance on downstream tasks. However, the constant growth of state-of-the-art models makes traditional full fine-tuning increasingly impractical, especially for multi-task learning (MTL), where cost scales with the number of tasks. As a result, recent studies investigate parameter-efficient fine-tuning (PEFT) with low-rank adaptation to significantly reduce the number of trainable parameters. However, these existing methods use a single, fixed rank, which may not be optimal across tasks or across positions in the MTL architecture. Moreover, they fail to learn spatial information that captures inter-task relationships and helps improve predictions on diverse tasks. This paper introduces Frequency-Aware and Automatic Rank (FAAR) for efficient MTL fine-tuning. Our method introduces Performance-Driven Rank Shrinking (PDRS) to allocate the optimal rank per adapter location and per task. Moreover, by analyzing the image frequency spectrum, FAAR employs a Task-Spectral Pyramidal Decoder (TS-PD) that injects input-specific context into spatial bias learning to better reflect cross-task relationships. Experiments on dense visual task benchmarks show the superiority of our method in both accuracy and efficiency compared to other PEFT methods for MTL. FAAR reduces the number of trainable parameters by up to 10.3× compared to traditional MTL fine-tuning while boosting performance on all tasks.
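To make the parameter-count argument concrete, the sketch below shows a generic LoRA-style low-rank update with a per-location rank, as the abstract describes. It is illustrative only: the `ranks` values and helper names are hypothetical, not the ranks PDRS would actually select.

```python
import numpy as np

def low_rank_update(d_out, d_in, rank, rng):
    """Generic LoRA-style update W + B @ A, where B (d_out x r) and
    A (r x d_in) add only r*(d_out + d_in) trainable parameters
    instead of the d_out*d_in of full fine-tuning."""
    B = np.zeros((d_out, rank))            # zero-init so the adapter
    A = rng.standard_normal((rank, d_in))  # starts as a no-op
    return B, A

# Hypothetical per-location ranks (PDRS would allocate these
# automatically per adapter location and per task; the numbers
# here are purely illustrative).
ranks = {"layer1": 8, "layer2": 4, "layer3": 2}
d = 256
rng = np.random.default_rng(0)

full = d * d  # parameter count of one fully fine-tuned weight matrix
for name, r in ranks.items():
    B, A = low_rank_update(d, d, r, rng)
    adapter = B.size + A.size
    print(f"{name}: rank={r}, adapter params={adapter}, "
          f"reduction vs full={full / adapter:.1f}x")
```

Halving the rank halves the adapter's parameter count, which is why allocating rank per location rather than using one fixed rank can shrink the overall budget without hurting the tasks that need higher capacity.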