Event-based Visual Deformation Measurement
Abstract
Visual Deformation Measurement (VDM) aims to recover dense deformation fields by tracking surface motion from camera observations. Traditional image-based methods rely on minimal inter-frame motion to constrain the correspondence search space, which limits their applicability to highly dynamic scenes or necessitates high-speed cameras at the cost of prohibitive storage and computational overhead. We propose an event-frame fusion framework that exploits events for temporally dense motion cues and frames for spatially dense, precise estimation. Revisiting the solid elastic modeling prior, we propose an Affine Invariant Simplicial (AIS) framework that partitions the deformation field into multiple sub-regions and linearizes the deformation within each sub-region using a low-parametric representation, effectively mitigating motion ambiguities arising from the sparse and noisy nature of event observations. To speed up parameter search and reduce error accumulation, a neighborhood-greedy optimization strategy is introduced, enabling well-converged sub-regions to guide their poorly converged neighbors and suppressing local error accumulation in long-term dense tracking. To evaluate the proposed method, we establish a benchmark dataset with temporally aligned event streams and high-frame-rate videos, comprising over 120 sequences that span diverse deformation scenarios. Experimental results show that the proposed method outperforms the state-of-the-art baseline by 1.6× in continuous measurement success rate (survival rate). Remarkably, our approach matches the accuracy of traditional high-speed video-based methods while requiring only 18.9% of their data storage and processing resources.
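To make the piecewise-linearization idea concrete, the sketch below fits a single low-parametric (6-DoF) 2D affine transform to one triangular sub-region via least squares, then warps an interior point with it. This is only an illustrative sketch of a piecewise-affine deformation model, not the paper's AIS implementation; all function names and the example coordinates are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of corresponding points, N >= 3."""
    # Homogeneous source coordinates [x, y, 1].
    X = np.hstack([src, np.ones((len(src), 1))])   # (N, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) solution
    return A.T                                     # (2, 3) affine matrix

def apply_affine(A, pts):
    """Warp (N, 2) points with a 2x3 affine matrix A."""
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ A.T

# One triangular sub-region: reference vertices and their deformed positions
# (hypothetical example values).
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
deformed = np.array([[0.1, 0.05], [1.2, 0.0], [0.0, 1.1]])

A = fit_affine(ref, deformed)
# Every point inside the sub-region is warped by the same affine map,
# so sparse, noisy observations only need to constrain 6 parameters.
center = ref.mean(axis=0, keepdims=True)
print(apply_affine(A, center))
```

With exactly three non-collinear correspondences the affine fit is exact, so the warped centroid coincides with the centroid of the deformed vertices; with more (noisy) points per sub-region, the least-squares fit averages out observation noise.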