Weight Space Representation Learning via Neural Field Adaptation
Zhuoqian Yang ⋅ Mathieu Salzmann ⋅ Sabine Süsstrunk
Abstract
In this work, we investigate the potential of neural network weights to serve as effective data representations, focusing on neural fields. Our key insight is that constraining the optimization space through a pre-trained base model and multiplicative low-rank adaptation (mLoRA) can induce structure in weight space. Across reconstruction, generation, and analysis tasks on 2D and 3D data, we find that mLoRA weights achieve high representation quality while exhibiting distinctiveness and semantic structure. When used with latent diffusion models, mLoRA weights enable higher-quality generation than existing weight-space methods. Source code will be made publicly available.
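To give a concrete sense of the idea, the following is a minimal sketch of multiplicative low-rank adaptation of a frozen base weight. The exact parameterization here (adapting as W_base(I + BA), with A zero-initialized so adaptation starts at the base model) is an illustrative assumption, not necessarily the paper's formulation:

```python
import numpy as np

# Hedged sketch of multiplicative low-rank adaptation (mLoRA): a frozen
# pre-trained weight W_base is modulated by a trainable low-rank factor,
# W_adapted = W_base @ (I + B @ A), instead of the additive update
# W_base + B @ A used in standard LoRA. Only A and B would be trained,
# so each fitted neural field is represented by a small set of weights.

rng = np.random.default_rng(0)
d, rank = 8, 2
W_base = rng.standard_normal((d, d))    # frozen pre-trained weight
A = np.zeros((rank, d))                 # trainable low-rank factor (zero init)
B = rng.standard_normal((d, rank)) * 0.01  # trainable low-rank factor

def adapted_forward(x):
    # W_base @ (I + B @ A) @ x, computed without forming the full matrix
    return W_base @ (x + B @ (A @ x))

x = rng.standard_normal(d)
# With A initialized to zero, the adapted model reproduces the base model.
assert np.allclose(adapted_forward(x), W_base @ x)
```

Under this reading, the (A, B) pairs obtained by fitting each datum become the weight-space representation used downstream, e.g. as inputs to a latent diffusion model.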