Personalized Federated Training of Diffusion Models with Privacy Guarantees
Abstract
We propose a federated framework for training diffusion models on decentralized, private datasets. The method jointly learns a shared generative model and personalized per-client models, allowing clients to benefit from cross-client structure while ensuring that the shared model alone cannot reproduce any client’s data. We provide formal differential privacy guarantees for each client and establish utility bounds for conditional generation under a Gaussian mixture model, showing that collaboration improves sample quality over private non-collaborative training. Experiments on CIFAR-10, Colorized MNIST, and CelebA corroborate these results: the method generates high-fidelity samples, improves performance on minority and underrepresented classes, and remains robust to membership inference, memorization, and reconstruction attacks.