

Poster

A4A: Adapter for Adapter Transfer via All-for-All Mapping for Cross-Architecture Models

Keyu Tu · Mengqi Huang · Zhuowei Chen · Zhendong Mao


Abstract:

Large-scale text-to-image models evolve rapidly in both size and architecture. Existing adapters struggle to keep pace with these models and require extensive retraining. This paper proposes a novel adapter transfer framework, A4A (Adapter for Adapter), which uses an all-for-all mapping approach to seamlessly transfer attention-based adapters across different model architectures (e.g., U-Net to transformer). The framework consists of two stages: Coupling Space Projection and Upgraded Space Mapping. During Coupling Space Projection, all attention features of the pre-trained adapter are collected, so as to capture the complete coupling relationship with the base model, and are then projected into a unified space. Randomly initialized learnable features in the upgraded model are introduced to connect the unified space with the upgraded space. By integrating the reference features through the attention mechanism and aligning them with the upgraded architecture, these learnable features bridge the discrepancies between the two models. Experimental results on personalized image generation tasks demonstrate that A4A outperforms previous adapter-transfer methods, while being the first to achieve adapter transfer across model architectures.
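The two-stage mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the module names, dimensions, and the choice of a single cross-attention layer are assumptions made for illustration. It shows collected adapter features being projected into a unified space, and randomly initialized learnable features attending over them to produce features aligned with the upgraded model.

```python
import torch
import torch.nn as nn

class A4ATransferSketch(nn.Module):
    """Illustrative sketch (not the released A4A code): learnable features in
    the upgraded model integrate projected adapter features via attention."""

    def __init__(self, src_dim: int, unified_dim: int, num_learnable: int = 4):
        super().__init__()
        # Coupling Space Projection: map the attention features collected from
        # the pre-trained adapter into a shared ("unified") space.
        self.project = nn.Linear(src_dim, unified_dim)
        # Randomly initialized learnable features living in the upgraded model.
        self.learnable = nn.Parameter(torch.randn(num_learnable, unified_dim))
        # Cross-attention through which the learnable features integrate the
        # projected reference features (Upgraded Space Mapping).
        self.attn = nn.MultiheadAttention(unified_dim, num_heads=4,
                                          batch_first=True)

    def forward(self, adapter_feats: torch.Tensor) -> torch.Tensor:
        # adapter_feats: (batch, num_feats, src_dim) -- all attention features
        # collected from the pre-trained adapter on the source architecture.
        unified = self.project(adapter_feats)
        # Broadcast the learnable queries across the batch.
        q = self.learnable.unsqueeze(0).expand(adapter_feats.size(0), -1, -1)
        out, _ = self.attn(q, unified, unified)
        # out: (batch, num_learnable, unified_dim), ready to be injected into
        # the upgraded model's attention layers.
        return out
```

In this reading, only `self.learnable` and the mapping layers are trained on the upgraded model, while the pre-trained adapter itself stays frozen.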
