
Poster

A-CAP: Anticipation Captioning With Commonsense Knowledge

Duc Minh Vo · Quoc-An Luong · Akihiro Sugimoto · Hideki Nakayama

West Building Exhibit Halls ABC 247

Abstract:

Humans possess the capacity to reason about the future from a sparse collection of visual cues acquired over time. To emulate this ability, we introduce a novel task called Anticipation Captioning, which generates a caption for an unseen oracle image given a sparse, temporally ordered set of images. To tackle this new task, we propose a model called A-CAP, which incorporates commonsense knowledge into a pre-trained vision-language model, allowing it to anticipate the caption. Through both qualitative and quantitative evaluations on a customized visual storytelling dataset, A-CAP outperforms other image captioning methods and establishes a strong baseline for anticipation captioning. We also address the challenges inherent in this task.
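To make the task setup concrete, the sketch below illustrates the input/output interface that the abstract describes: a sparse, temporally ordered sequence of observed images in, a caption for an unseen future (oracle) image out. This is a minimal, hypothetical interface; the names (`Observation`, `AnticipationCaptioner`) and the placeholder logic are illustrative assumptions, not the authors' A-CAP implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    """One observed frame in the sparse input sequence (hypothetical type)."""
    image_path: str   # path to the observed image
    timestamp: float  # relative time at which it was acquired

class AnticipationCaptioner:
    """Toy stand-in for a model like A-CAP: a pre-trained vision-language
    captioner augmented with commonsense knowledge about plausible futures."""

    def caption_future(self, observations: List[Observation]) -> str:
        # A real model would (1) encode each image with a pre-trained
        # vision-language backbone, (2) inject commonsense knowledge about
        # likely next events, and (3) decode a caption for the unseen
        # oracle image. Here we only validate the input and return a stub.
        ordered = sorted(observations, key=lambda o: o.timestamp)
        assert observations == ordered, "inputs must be temporally ordered"
        return "a caption anticipating what happens next"

if __name__ == "__main__":
    frames = [Observation("frame_00.jpg", 0.0), Observation("frame_05.jpg", 5.0)]
    print(AnticipationCaptioner().caption_future(frames))
```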
