UIKA: Fast Universal Head Avatar from Pose-Free Images
Abstract
We present UIKA, a feed-forward model that reconstructs an animatable Gaussian head avatar from an arbitrary number of unposed inputs, including a single image, multi-view captures, and smartphone-captured videos. Unlike traditional avatar methods, which require a studio-level multi-view capture system and reconstruct a subject-specific model through a lengthy optimization process, we rethink the task through the lenses of model representation, network design, and data preparation. First, we introduce a UV-guided avatar modeling strategy in which each input image is associated with a pixel-wise UV coordinate estimate. This estimate allows us to project every valid pixel from screen space into UV space, which is independent of camera pose and facial expression. We therefore represent our Gaussian head avatar in this UV space. To this end, we design learnable UV tokens over which attention can be applied at both the screen and UV levels. The learned UV tokens are decoded into canonical Gaussian attributes using UV information aggregated from all input views. The resulting Gaussian avatar is directly animatable via standard linear blend skinning and supports real-time rendering. To train our large avatar model, we further prepare a large-scale, identity-rich training dataset with controllable views and motions, synthesized with a 3D GAN and a state-of-the-art image animation model. Our method significantly outperforms existing approaches in rendering quality, 3D consistency, and inference efficiency on both single-view and multi-view inputs.
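The UV-token aggregation summarized above can be sketched conceptually as follows: per-pixel image features are tied to UV texels via the predicted UV coordinates, aggregated into learnable UV tokens by cross-attention, and decoded into canonical Gaussian attributes. This is a minimal illustration under our own assumptions; the module name `UVTokenAggregator`, the feature dimension, the attribute channel layout, and the single-layer attention design are hypothetical choices for readability, not the actual architecture.

```python
# Minimal sketch (not the paper's implementation) of aggregating multi-view
# pixel features into a pose-free UV token grid and decoding Gaussian attributes.
import torch
import torch.nn as nn


class UVTokenAggregator(nn.Module):
    """Aggregate pixel features from all input views into a UV token grid."""

    def __init__(self, feat_dim: int = 64, uv_res: int = 32, num_gauss_attrs: int = 11):
        super().__init__()
        # Learnable UV tokens: one token per texel of a coarse UV grid.
        self.uv_tokens = nn.Parameter(torch.randn(uv_res * uv_res, feat_dim))
        self.uv_res = uv_res
        # Cross-attention: UV tokens query the pixel features gathered from all views.
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        # Decode each aggregated UV token into Gaussian attributes, e.g.
        # offset(3) + rotation(4) + scale(3) + opacity(1) = 11 channels (assumed layout).
        self.decoder = nn.Linear(feat_dim, num_gauss_attrs)

    def forward(self, pix_feats: torch.Tensor, pix_uv: torch.Tensor) -> torch.Tensor:
        """
        pix_feats: (B, N, C)  features of N valid pixels from all input views
        pix_uv:    (B, N, 2)  predicted UV coordinates in [0, 1] for those pixels
        returns:   (B, uv_res*uv_res, num_gauss_attrs) canonical Gaussian attributes
        """
        B = pix_feats.shape[0]
        # Tie each pixel to its UV texel by adding that texel's token embedding,
        # so attention is aware of where each pixel lands in UV space.
        texel = (pix_uv.clamp(0, 1) * (self.uv_res - 1)).long()
        texel_id = texel[..., 1] * self.uv_res + texel[..., 0]      # (B, N)
        keys = pix_feats + self.uv_tokens[texel_id]                 # (B, N, C)
        queries = self.uv_tokens.unsqueeze(0).expand(B, -1, -1)     # (B, R*R, C)
        agg, _ = self.cross_attn(queries, keys, keys)               # UV-level aggregation
        return self.decoder(agg)


if __name__ == "__main__":
    model = UVTokenAggregator()
    feats = torch.randn(1, 5000, 64)   # features of 5000 valid pixels
    uv = torch.rand(1, 5000, 2)        # their predicted UV coordinates
    print(model(feats, uv).shape)      # torch.Size([1, 1024, 11])
```

Because the aggregation is indexed purely by UV coordinates, the same token grid can absorb one image or many without architectural changes, which is the property the abstract relies on for handling single-view and multi-view inputs alike.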