Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration
Danil Tokhchukov ⋅ Aysel Mirzoeva ⋅ Andrey Kuznetsov ⋅ Konstantin Sobolev
Abstract
In this paper, we uncover the hidden potential of Diffusion Transformers (DiTs) to substantially enhance generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can significantly improve the performance of DiT blocks. Building on this insight, we propose *Calibri*, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. *Calibri* frames DiT calibration as a black-box reward optimization problem, solved efficiently with an evolutionary algorithm while modifying only $\sim 10^2$ parameters. Additionally, *Calibri* introduces an inference-time ensemble scaling strategy to further boost generative performance. Experimental results show that, despite its lightweight design, *Calibri* consistently improves performance across various text-to-image models. Notably, *Calibri* also reduces the number of inference steps required for image generation while maintaining high-quality outputs.
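The calibration idea described above can be sketched in a toy form: one scalar per block, tuned by a black-box evolutionary search against a reward. This is a minimal illustration, not the paper's actual implementation; the linear "blocks," the distance-based reward, and all function names are assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_blocks(x, scales, weights):
    # Each toy "block" is a linear residual update; `scales` holds the
    # per-block calibration scalars (the ~10^2 parameters being tuned).
    for s, W in zip(scales, weights):
        x = x + s * (W @ x)
    return x

def reward(scales, weights, x0, target):
    # Black-box reward: negative distance to a target output. In the real
    # setting this would be a generative-quality score, not a toy distance.
    out = apply_blocks(x0, scales, weights)
    return -np.linalg.norm(out - target)

def evolve(weights, x0, target, n_blocks, iters=200, pop=16, sigma=0.05):
    # Simple elitist evolution strategy over the scaling parameters:
    # sample Gaussian perturbations, keep the best candidate seen so far.
    best = np.ones(n_blocks)  # start from the uncalibrated model (scale 1)
    best_r = reward(best, weights, x0, target)
    for _ in range(iters):
        cands = best + sigma * rng.standard_normal((pop, n_blocks))
        rs = [reward(c, weights, x0, target) for c in cands]
        i = int(np.argmax(rs))
        if rs[i] > best_r:
            best, best_r = cands[i], rs[i]
    return best, best_r

n_blocks, dim = 8, 4
weights = [0.1 * rng.standard_normal((dim, dim)) for _ in range(n_blocks)]
x0 = rng.standard_normal(dim)
target = rng.standard_normal(dim)

base_r = reward(np.ones(n_blocks), weights, x0, target)
scales, best_r = evolve(weights, x0, target, n_blocks)
print(best_r >= base_r)  # elitist search never decreases the reward
```

Because the search is elitist and starts from the uncalibrated scales, the calibrated reward is never worse than the baseline, mirroring the intended role of calibration as a strict refinement of a pretrained model.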