Poster
ChatGen: A Unified Model for Interactive Multimodal Generation as We Chat
Zhipeng Huang · Shaobin Zhuang · Canmiao Fu · Binxin Yang · Ying Zhang · Chong Sun · Chen Li · Yali Wang · Zhizheng Zhang · Zheng-Jun Zha
Existing multimodal generative models fall short as qualified design copilots: they often struggle to generate imaginative outputs when instructions are less detailed, or lack the ability to maintain consistency with the provided references. In this work, we introduce ChatGen, a model that unifies multimodal generation and understanding and promotes their interplay in iterative generation. It can generate diverse, highly creative results for less detailed instructions, and it can progressively refine prior generation results or integrate specific content from references by following the instructions in its chat with users. During this process, it preserves consistency in the parts the user is already satisfied with. To this end, we curate a large-scale dataset extracted from Internet videos, containing rich object dynamics along with dynamics descriptions auto-labeled by advanced foundation models. These two types of information are interleaved into a single sequence, enabling ChatGen to learn consistency-aware generation in which the specified dynamics are generated while unspecified content remains consistent, as instructed. In addition, we introduce a prompt self-rewriting mechanism to enhance generation diversity. Extensive experiments demonstrate the effectiveness of unifying multimodal understanding and generation in ChatGen and show that it achieves state-of-the-art performance across various visual generation benchmarks. The results also demonstrate the potential of ChatGen as a user-friendly design copilot.
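The abstract describes interleaving text instructions and reference visual content into a single sequence for consistency-aware generation. As a rough illustration of that idea only (the paper's actual tokenizers, special tokens, and data format are not specified here), the sketch below shows one plausible way a chat of instructions and reference images could be flattened into one token sequence; all helper names and special tokens are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): interleaving instruction text and
# reference-image tokens into one sequence so a single autoregressive model
# can attend to both. All special tokens and tokenize_* helpers are
# hypothetical stand-ins for illustration.

from typing import List

BOI, EOI = "<img>", "</img>"  # hypothetical image-boundary tokens
SEP = "<sep>"                 # hypothetical turn separator


def tokenize_text(text: str) -> List[str]:
    """Stand-in word-level tokenizer, for illustration only."""
    return text.split()


def tokenize_image(image_id: str, n_patches: int = 4) -> List[str]:
    """Stand-in visual tokenizer: emits placeholder patch tokens."""
    return [f"{image_id}_patch{i}" for i in range(n_patches)]


def build_interleaved_sequence(turns: List[dict]) -> List[str]:
    """Flatten a chat of text instructions and reference images into one
    token sequence, mirroring the interleaved-data idea in the abstract."""
    seq: List[str] = []
    for turn in turns:
        if turn["type"] == "text":
            seq += tokenize_text(turn["content"])
        elif turn["type"] == "image":
            seq += [BOI, *tokenize_image(turn["content"]), EOI]
        seq.append(SEP)
    return seq


if __name__ == "__main__":
    chat = [
        {"type": "text", "content": "keep the cat, change the background to a beach"},
        {"type": "image", "content": "reference_img"},
    ]
    print(build_interleaved_sequence(chat))
```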