Poster

One-Shot Open Affordance Learning with Foundation Models

Gen Li · Deqing Sun · Laura Sevilla-Lara · Varun Jampani


Abstract:

We introduce One-shot Open Affordance Learning (OOAL), in which a model is trained with just one example per base object category yet is expected to identify novel objects and affordances. While vision-language models excel at recognizing novel objects and scenes, they often struggle to understand finer levels of granularity such as affordances. To address this issue, we conduct a comprehensive analysis of existing foundation models to probe their inherent understanding of affordances and assess their potential for data-limited affordance learning. We then propose a vision-language framework with simple and effective designs that boost the alignment between visual features and affordance text embeddings. Experiments on two affordance segmentation benchmarks show that the proposed method outperforms state-of-the-art models with less than 1% of the full training data, and exhibits reasonable generalization to unseen objects and affordances.
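The abstract's core mechanism, scoring dense visual features against affordance text embeddings, can be illustrated with a minimal sketch. Everything below is an assumption for illustration (PyTorch, the function and variable names, a cosine-similarity head with a temperature), not the authors' exact architecture.

# Minimal sketch: per-patch affordance logits from the cosine similarity
# between frozen visual patch features and affordance text embeddings.
# All names and shapes here are illustrative assumptions.
import torch
import torch.nn.functional as F

def affordance_logits(visual_feats, text_embeds, temperature=0.07):
    """
    visual_feats: (B, N, D) patch features from a vision backbone
                  (e.g. a ViT), with N = H*W patches.
    text_embeds:  (A, D) one embedding per affordance label from a
                  text encoder (e.g. a CLIP-style one).
    Returns:      (B, A, H, W) per-patch affordance logits.
    """
    B, N, D = visual_feats.shape
    H = W = int(N ** 0.5)                  # assume a square patch grid
    v = F.normalize(visual_feats, dim=-1)  # unit-norm -> cosine similarity
    t = F.normalize(text_embeds, dim=-1)
    logits = torch.einsum("bnd,ad->ban", v, t) / temperature
    return logits.view(B, -1, H, W)

# Toy usage: 14x14 patch grid, 512-dim embeddings, 3 affordance labels.
feats = torch.randn(1, 196, 512)
texts = torch.randn(3, 512)
masks = affordance_logits(feats, texts)    # shape (1, 3, 14, 14)

Upsampling these coarse logits to the image resolution and thresholding them would yield affordance masks; in a one-shot setting, only lightweight alignment components would be trained while both encoders stay frozen.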
