

Poster

Recovering Dynamic 3D Sketches from Videos

Jaeah Lee · Changwoon Choi · Young Min Kim · Jaesik Park


Abstract:

Understanding 3D motion from videos presents inherent challenges due to the diverse types of movement, ranging from rigid and deformable objects to articulated structures. Furthermore, reconstructing photorealistic dynamic motion remains challenging in underconstrained setups where multi-view information is limited. In this paper, we aim to overcome this by abstracting motion with deformable 3D strokes. The detailed motion of an object may be represented by unstructured motion vectors or by a set of motion primitives using a pre-defined articulation from a template model. Just as a free-hand sketch can intuitively visualize scenes or intentions with a sparse set of lines, we utilize a set of parametric 3D curves to capture spatially smooth motion elements for general objects with unknown structures. We first extract noisy 3D point cloud motion guidance from video frames using semantic features, and our approach then deforms a set of curves to abstract essential motion features as a set of explicit 3D representations. Such abstraction enables an understanding of the prominent components of movement while maintaining robustness to environmental factors. Our approach allows direct analysis of 3D motion from video, tackling the uncertainty that typically arises when translating real-world motion into recorded footage.
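The abstract describes deforming parametric 3D curves so they follow noisy point-cloud motion guidance. The following is a minimal, hypothetical sketch (not the authors' implementation) of that general idea in Python/PyTorch: a single cubic Bézier stroke is deformed between two frames by optimizing a control-point displacement against a Chamfer-style fit to the moved points, with a small smoothness prior. The curve parameterization, loss, and toy data are all assumptions for illustration.

```python
# Hypothetical illustration: deform a parametric 3D curve (cubic Bezier) so its
# samples follow noisy point-cloud motion guidance between two frames.
# The parameterization, loss, and data below are assumptions, not the paper's code.
import torch

def bezier_points(ctrl, n=64):
    """Sample n points along a cubic Bezier curve given 4 control points (4, 3)."""
    t = torch.linspace(0.0, 1.0, n, device=ctrl.device).unsqueeze(1)  # (n, 1)
    basis = torch.cat([(1 - t) ** 3,
                       3 * (1 - t) ** 2 * t,
                       3 * (1 - t) * t ** 2,
                       t ** 3], dim=1)                                # (n, 4)
    return basis @ ctrl                                               # (n, 3)

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (Na, 3) and b (Nb, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Toy "motion guidance": points at frame t and their noisy positions at t+1.
torch.manual_seed(0)
pts_t = torch.rand(200, 3)
pts_t1 = pts_t + torch.tensor([0.1, 0.0, 0.0]) + 0.01 * torch.randn(200, 3)

# A stroke fitted to frame t (kept fixed here); optimize a per-frame
# displacement of its control points to match the moved point cloud.
ctrl_t = torch.rand(4, 3)
delta = torch.zeros(4, 3, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(300):
    opt.zero_grad()
    curve_t1 = bezier_points(ctrl_t + delta)
    loss = chamfer(curve_t1, pts_t1) + 1e-2 * delta.pow(2).mean()  # smoothness prior
    loss.backward()
    opt.step()

print("estimated control-point displacement:\n", delta.detach())
```

In practice a full system would optimize many strokes jointly across all frames and use learned semantic features to establish correspondences; this sketch only shows the curve-deformation step in isolation.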
