Poster
Unbiased Video Scene Graph Generation via Visual and Semantic Dual Debiasing
Yanjun Li · Zhaoyang Li · Honghui Chen · Lizhi Xu
Video Scene Graph Generation (VidSGG) aims to capture dynamic relationships among entities by sequentially analyzing video frames and integrating visual and semantic information. However, VidSGG is challenged by significant biases that skew its predictions. To mitigate these biases, we propose a VIsual and Semantic Awareness (VISA) framework for unbiased VidSGG. VISA addresses visual bias through a memory update mechanism that enhances object representations, and concurrently reduces semantic bias by iteratively integrating object features with comprehensive semantic information derived from triplet relationships. This visual-semantic dual debiasing yields less biased representations of complex scene dynamics. Extensive experiments demonstrate the effectiveness of our method: VISA outperforms existing unbiased VidSGG approaches by a substantial margin (e.g., +13.1% improvement in mR@20 and mR@50 on the SGCLS task under the Semi Constraint setting).
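To make the dual-debiasing idea concrete, here is a minimal PyTorch-style sketch of the two components the abstract describes: a memory-based update that enhances object features, and an iterative fusion of those features with triplet semantics. All module names, dimensions, and the specific attention/fusion operators are illustrative assumptions for this sketch, not the authors' actual VISA implementation.

    import torch
    import torch.nn as nn

    class VisualMemoryDebias(nn.Module):
        """Sketch of a memory update mechanism (hypothetical design):
        a learnable bank of object prototypes is attended over by incoming
        object features, and the result refines them via a residual update."""
        def __init__(self, dim: int, memory_slots: int = 64):
            super().__init__()
            self.memory = nn.Parameter(torch.randn(memory_slots, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

        def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
            # obj_feats: (batch, num_objects, dim)
            mem = self.memory.unsqueeze(0).expand(obj_feats.size(0), -1, -1)
            enhanced, _ = self.attn(query=obj_feats, key=mem, value=mem)
            return obj_feats + enhanced  # residual keeps the original signal

    class SemanticTripletFusion(nn.Module):
        """Sketch of iterative semantic debiasing (hypothetical design):
        object features are repeatedly fused with per-object semantic context
        derived from subject-predicate-object triplet embeddings."""
        def __init__(self, dim: int, iters: int = 2):
            super().__init__()
            self.iters = iters
            self.fuse = nn.Linear(2 * dim, dim)

        def forward(self, obj_feats: torch.Tensor, triplet_ctx: torch.Tensor) -> torch.Tensor:
            # triplet_ctx: (batch, num_objects, dim) semantic context per object
            for _ in range(self.iters):
                obj_feats = torch.relu(
                    self.fuse(torch.cat([obj_feats, triplet_ctx], dim=-1))
                )
            return obj_feats

In this reading, the two modules run in parallel on per-frame object features: the memory module counters visual bias by pulling features toward accumulated prototypes, while the fusion module counters semantic bias by grounding them in relationship-level context before predicate classification.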