Poster
Locality-Aware Zero-Shot Human-Object Interaction Detection
Sanghyun Kim · Deunsol Jung · Minsu Cho
Abstract:
Recent methods for zero-shot Human-Object Interaction (HOI) detection typically leverage the generalization ability of the large Vision-Language Model (VLM) CLIP to unseen categories, showing impressive results in various zero-shot settings. However, existing methods struggle to adapt CLIP representations to human-object pairs, as CLIP tends to overlook the fine-grained information necessary for distinguishing interactions. To address this issue, we devise LAIN, a novel zero-shot HOI detection framework that enhances the locality and interaction awareness of CLIP representations. Locality awareness, which involves capturing fine-grained details and the spatial structure of individual objects, is achieved by aggregating the information and spatial priors of adjacent neighborhood patches. Interaction awareness, which involves identifying whether and how a human is interacting with an object, is achieved by capturing the interaction pattern between the human and the object. By infusing locality and interaction awareness into CLIP representations, LAIN captures detailed information about human-object pairs. Our extensive experiments on existing benchmarks show that LAIN outperforms previous methods across various zero-shot settings, demonstrating the importance of locality and interaction awareness for effective zero-shot HOI detection.
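The locality-awareness idea of pooling each patch's 3x3 neighborhood under a spatial prior can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: `aggregate_neighbors` and `spatial_prior` are names invented here, and in LAIN the prior and mixing would be learned inside the CLIP encoder rather than applied as a fixed convolution-like step.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): update each patch token of a
# ViT feature grid with a weighted sum over its 3x3 neighborhood, where the
# weights act as a spatial prior over adjacent patches.

def aggregate_neighbors(patch_tokens, spatial_prior=None):
    """patch_tokens: (H, W, D) grid of patch features.
    spatial_prior: optional (3, 3) weights over the neighborhood."""
    H, W, D = patch_tokens.shape
    if spatial_prior is None:
        spatial_prior = np.ones((3, 3)) / 9.0  # uniform prior over neighbors
    out = np.zeros_like(patch_tokens)
    # Edge-replicate padding so border patches also see a full 3x3 window.
    padded = np.pad(patch_tokens, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for i in range(H):
        for j in range(W):
            window = padded[i:i + 3, j:j + 3, :]  # 3x3 neighborhood
            out[i, j] = (spatial_prior[..., None] * window).sum(axis=(0, 1))
    return out

# Example: a 4x4 grid of 8-dimensional patch features.
feats = np.random.rand(4, 4, 8)
local_feats = aggregate_neighbors(feats)
print(local_feats.shape)  # -> (4, 4, 8)
```

A learned (rather than uniform) `spatial_prior` would let the model emphasize, say, patches below a hand when distinguishing "hold" from "throw"; the sketch only shows the aggregation pattern itself.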