

EvDiG: Event-guided Direct and Global Components Separation

Xinyu Zhou · Peiqi Duan · Boyu Li · Chu Zhou · Chao Xu · Boxin Shi

Arch 4A-E Poster #213
Thu 20 Jun 10:30 a.m. PDT — noon PDT
Oral presentation: Orals 3C Medical and Physics-based vision
Thu 20 Jun 9 a.m. PDT — 10:30 a.m. PDT


Separating the direct and global components of a scene aids shape recovery and basic material understanding. Conventional methods capture multiple frames under high-frequency illumination patterns or shadows, requiring the scene to remain stationary throughout image acquisition. Single-frame methods simplify the capture procedure but yield lower-quality separation results. In this paper, we leverage an event camera to facilitate the separation of direct and global components, enabling high-quality separation at video rate. Specifically, we use the event camera to record the rapid illumination changes caused by the shadow of a line occluder sweeping over the scene, and reconstruct coarse separation results through event accumulation. We then design a network to suppress the noise in the coarse separation results and restore color information. A real-world dataset is collected with a hybrid camera system for network training and evaluation. Experimental results show superior performance over state-of-the-art methods.
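As a point of reference for the coarse separation step, the classic shadow-based principle (Nayar et al.) observes that a pixel lying in the occluder's shadow receives only the global (indirect) component, while a fully lit pixel receives direct plus global. A minimal NumPy sketch of that frame-based baseline follows; the function name, array layout, and the use of per-pixel max/min over the sweep are illustrative assumptions, not the paper's event-based pipeline:

```python
import numpy as np

def separate_direct_global(frames):
    """Shadow-sweep separation sketch (frame-based baseline, not EvDiG itself).

    frames: sequence of (H, W) intensity images captured while a thin line
    occluder's shadow sweeps the scene, so that every pixel is shadowed in
    at least one frame and fully lit in at least one other.
    """
    stack = np.asarray(frames, dtype=np.float64)
    i_max = stack.max(axis=0)  # fully lit: direct + global
    i_min = stack.min(axis=0)  # shadowed: global only (thin-occluder assumption)
    direct = i_max - i_min
    global_comp = i_min
    return direct, global_comp
```

EvDiG replaces this dense frame stack with asynchronous events that record the same illumination transitions at much higher temporal resolution, which is what makes video-rate capture feasible.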
