

Poster

AdMiT: Adaptive Multi-Source Tuning in Dynamic Environments

Xiangyu Chang · Fahim Faisal Niloy · Sk Miraj Ahmed · Srikanth Krishnamurthy · Basak Guler · Ananthram Swami · Samet Oymak · Amit K. Roy-Chowdhury


Abstract:

Deploying transformer models on edge devices is challenging because of the computational demands of adapting these large models across diverse applications. Parameter-efficient tuning (PET) methods such as LoRA, Adapters, and Visual Prompt Tuning (VPT) enable targeted adaptation by modifying only small parts of the transformer. However, adapting to dynamic, unlabeled target distributions at test time remains difficult. To address this, we introduce AdMiT: Adaptive Multi-Source Tuning in Dynamic Environments. AdMiT pre-trains a set of PET modules, each optimized for a different source distribution or task, and dynamically selects and integrates a sparse subset of relevant modules when it encounters a new, few-shot, unlabeled target distribution. The integration uses Kernel Mean Embedding (KME)-based matching to align the target distribution with relevant source knowledge efficiently, without requiring additional routing networks or hyperparameter tuning. AdMiT adapts in a single inference step, making it well suited to resource-constrained edge deployments, and it preserves privacy by performing adaptation locally on each device, with no data exchange required. Our theoretical analysis establishes generalization guarantees for AdMiT, and extensive benchmarks show that AdMiT consistently outperforms other PET methods across a range of tasks, achieving robust and efficient adaptation.
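To make the KME-based matching concrete, below is a minimal sketch of how a sparse subset of pre-trained PET modules might be selected and merged for an unlabeled target batch. It assumes a Gaussian kernel, so the distance between two kernel mean embeddings reduces to the (biased) squared MMD estimate; the feature arrays, parameter deltas, top-k selection, and softmax-style weighting here are illustrative assumptions, not AdMiT's exact procedure.

```python
# Hedged sketch: KME-based selection and merging of source PET modules.
# Assumptions (not from the paper): an RBF kernel, top-k selection, and
# similarity-proportional mixing weights over pre-trained parameter deltas.
import numpy as np


def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)


def mmd2(X, Y, gamma=1.0):
    """Squared MMD = ||mu_X - mu_Y||^2 in the kernel's RKHS, i.e. the
    squared distance between the two empirical kernel mean embeddings."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())


def select_and_merge(target_feats, source_feats, source_deltas, k=2, gamma=1.0):
    """Pick the k source modules whose feature distributions are closest
    to the few-shot target batch (smallest MMD), then merge their PET
    parameter deltas with weights proportional to similarity."""
    dists = np.array([mmd2(target_feats, S, gamma) for S in source_feats])
    top = np.argsort(dists)[:k]                  # sparse subset of modules
    sims = np.exp(-dists[top])                   # similarity scores
    w = sims / sims.sum()                        # normalized mixing weights
    merged = sum(wi * source_deltas[i] for wi, i in zip(w, top))
    return merged, top, w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_feat, d_param = 16, 8
    # Per-source feature samples (stand-ins for backbone features) and the
    # corresponding pre-trained PET parameter deltas (e.g. LoRA updates).
    source_feats = [rng.normal(loc=m, size=(64, d_feat)) for m in (0.0, 1.0, 3.0)]
    source_deltas = [rng.normal(size=(d_param,)) for _ in range(3)]
    # Unlabeled few-shot target batch, drawn closest to the second source.
    target_feats = rng.normal(loc=1.1, size=(8, d_feat))
    merged, top, w = select_and_merge(target_feats, source_feats, source_deltas)
    print("selected modules:", top, "weights:", np.round(w, 3))
```

Because the matching is a closed-form comparison of precomputed source embeddings against one target batch, no routing network is trained and adaptation can happen in a single forward pass, consistent with the single-inference-step claim in the abstract.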
