Bridging Privacy and Provenance: Traceable Virtual Identity Generation
Abstract
Recent advances in generative models have enabled the creation of high-fidelity human faces, yet constructing reliable virtual identities that preserve user privacy while supporting consistent and verifiable identity assignment remains challenging. In this paper, we propose a diffusion-based framework for generating traceable virtual identities that exhibit stable identity semantics while preserving pose and expression. Our framework couples a virtual identity sampler, which generates diverse yet consistent identity embeddings, with a 3D geometry and expression conditioning module that preserves the pose and other non-identity characteristics of the input face. In addition, we incorporate a lightweight latent watermarking mechanism that embeds an imperceptible identity signature during generation, enabling a user to verify ownership of the resulting virtual identity through a secure token without revealing their real facial appearance. Quantitative evaluations demonstrate that our method achieves high identity consistency across repeated sampling, strong pose and expression fidelity, and improved anonymity compared with prior work. These results validate the effectiveness of integrating virtual identity sampling, geometric conditioning, and latent watermarking into a single generative framework, and highlight the practical potential of our solution for constructing privacy-aware virtual identities.
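The abstract does not specify how the latent watermark is embedded or verified. As a purely illustrative sketch — not the authors' actual mechanism — the token-based ownership check can be mimicked by deriving a deterministic bit pattern from a secret token with HMAC and writing it, as small signed perturbations, into a slice of a latent vector assumed to be reserved for the signature. All names here (`embed`, `verify`, `N_BITS`, the reserved-tail assumption) are hypothetical:

```python
import hmac
import hashlib
import numpy as np

N_BITS = 128  # length of the embedded identity signature (assumed)


def signature_bits(token: bytes) -> np.ndarray:
    """Derive a deterministic 128-bit pattern from the user's secret token."""
    digest = hmac.new(token, b"virtual-identity-signature", hashlib.sha256).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:N_BITS]


def embed(latent: np.ndarray, token: bytes, eps: float = 1e-3) -> np.ndarray:
    """Write the signature into a reserved (assumed unused) tail of the latent."""
    bits = signature_bits(token)
    marked = latent.copy()
    marked[-N_BITS:] = eps * (2.0 * bits - 1.0)  # bit 1 -> +eps, bit 0 -> -eps
    return marked


def verify(latent: np.ndarray, token: bytes, threshold: float = 0.9) -> bool:
    """Recover bits from perturbation signs; accept if they match the token's pattern."""
    expected = signature_bits(token)
    recovered = (latent[-N_BITS:] > 0).astype(np.uint8)
    return float(np.mean(recovered == expected)) >= threshold
```

In this toy version, verification succeeds only for the holder of the correct token, and the latent itself reveals nothing about the user's real face. A practical system would instead embed the signature redundantly and robustly, so that it survives the diffusion decoder and common image transformations.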