3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow

Felix Taubner · Prashant Raina · Mathieu Tuli · Eu Wern Teh · Chul Lee · Jinmiao Huang

Arch 4A-E Poster #104
Wed 19 Jun 10:30 a.m. PDT — noon PDT


Improving 3D facial fidelity and avoiding the uncanny valley effect depend critically on accurate 3D facial performance capture. Because 2D videos are widely available, recent methods focus on monocular 3D face tracking. However, these methods often fall short of capturing precise facial movements due to limitations in their network architecture, training, and evaluation processes. To address these challenges, we propose a novel face tracker, FlowFace, that introduces an innovative 2D alignment network for dense per-vertex alignment. Unlike prior work, FlowFace is trained on high-quality 3D scan annotations rather than weak supervision or synthetic data. Our 3D model fitting module jointly fits a 3D face model to one or many observations, integrating existing neutral shape priors for enhanced identity and expression disentanglement and per-vertex deformations for detailed facial feature reconstruction. Additionally, we propose a novel metric and benchmark for assessing tracking accuracy. Our method achieves superior performance on both custom and publicly available benchmarks. We further validate the effectiveness of our tracker by generating high-quality data, which leads to performance gains on downstream tasks.
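The abstract's 3D model fitting module fits a parametric face model to dense per-vertex observations from one or many frames. A minimal sketch of that idea, assuming a linear blendshape face model under orthographic projection (all names and shapes here are illustrative, not the authors' implementation):

```python
import numpy as np

# Hypothetical illustration: fit per-frame expression coefficients of a
# linear blendshape face model to dense per-vertex 2D correspondences,
# one least-squares solve per observation (frame).

rng = np.random.default_rng(0)
V, K = 50, 4                        # vertices, expression blendshapes
mean = rng.normal(size=(V, 3))      # neutral (identity) shape
basis = rng.normal(size=(K, V, 3))  # expression blendshape basis

def project(shape):
    """Orthographic projection of 3D vertices to the image plane."""
    return shape[:, :2]

def fit_expression(targets_2d):
    """Fit expression coefficients for each frame by least squares."""
    A = basis[:, :, :2].reshape(K, -1).T       # (2V, K) design matrix
    coeffs = []
    for t in targets_2d:                       # one solve per observation
        b = (t - project(mean)).ravel()        # residual vs. neutral shape
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        coeffs.append(w)
    return np.stack(coeffs)

# Synthesize two frames with known coefficients, then recover them.
true_w = rng.normal(size=(2, K))
frames = [project(mean + np.tensordot(w, basis, axes=1)) for w in true_w]
est_w = fit_expression(frames)
print(np.allclose(est_w, true_w, atol=1e-6))  # True: coefficients recovered
```

A real tracker would additionally estimate camera pose, a shared identity shape across frames, and per-vertex deformations, with regularization; this sketch shows only the core dense-correspondence fitting step.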
