Poster
VinaBench: Benchmark for Faithful and Consistent Visual Narratives
Silin Gao · Sheryl Mathew · Li Mi · Sepideh Mamooler · Mengjie Zhao · Hiromi Wakaki · Yuki Mitsufuji · Syrielle Montariol · Antoine Bosselut
Visual narrative generation transforms a textual narrative into a sequence of images that illustrate its content. However, generating visual narratives that are faithful to the input text and self-consistent across the generated images remains an open challenge, largely because models lack the knowledge constraints needed to plan such stories. In this work, we propose a new benchmark, VinaBench, to address this challenge. Our benchmark annotates the underlying commonsense and discourse constraints in visual narrative samples, offering systematic scaffolds for learning the implicit strategies of visual storytelling. Based on these annotated constraints, we further propose novel metrics to closely evaluate the consistency of generated narrative images and their alignment with the input textual narrative. Our results across three generative vision models demonstrate that learning with VinaBench's knowledge constraints effectively improves the faithfulness and cohesion of generated visual narratives.