Poster

VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling

Zeyue Tian · Zhaoyang Liu · Ruibin Yuan · Jiahao Pan · Qifeng Liu · Xu Tan · Qifeng Chen · Wei Xue · Yike Guo


Abstract:

We present VidMuse, a simple framework for generating music aligned with video inputs. VidMuse stands out by producing high-fidelity music that is both acoustically and semantically aligned with the video. Through Long-Short-Term modeling that incorporates both local and global visual cues, VidMuse creates musically coherent audio tracks that seamlessly match the video content. Furthermore, we present a large-scale dataset comprising 360K video-music pairs spanning diverse video types such as movie trailers, advertisements, and documentaries. Extensive experiments show that VidMuse outperforms existing models in terms of audio quality, diversity, and audio-visual alignment.
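To make the Long-Short-Term idea concrete, here is a minimal PyTorch sketch of how a music decoder might be conditioned on both a local window of frames (short-term, fine-grained alignment) and frames sampled across the whole video (long-term, global semantics). All names (`LongShortTermFusion`, tensor shapes, feature dimensions) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LongShortTermFusion(nn.Module):
    """Hypothetical sketch: fuse short-term (local) and long-term (global)
    visual cues into a single conditioning signal for a music decoder."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Short-term branch: attends over frames near the current audio
        # segment, capturing fine-grained audio-visual alignment.
        self.short_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Long-term branch: attends over frames sampled from the entire
        # video, capturing global semantics such as mood and pacing.
        self.long_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Project the concatenated branches back to the model dimension.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, audio_q, local_frames, global_frames):
        # audio_q:       (B, T_a, dim) query states from the music decoder
        # local_frames:  (B, T_l, dim) features of frames around the segment
        # global_frames: (B, T_g, dim) features of frames across the video
        short, _ = self.short_attn(audio_q, local_frames, local_frames)
        long, _ = self.long_attn(audio_q, global_frames, global_frames)
        return self.fuse(torch.cat([short, long], dim=-1))


# Usage: condition each music-token step on local and global visual cues.
fusion = LongShortTermFusion()
audio_q = torch.randn(2, 50, 512)       # decoder states for 50 audio steps
local = torch.randn(2, 16, 512)         # 16 frames around the current clip
global_ = torch.randn(2, 64, 512)       # 64 frames sampled over the video
cond = fusion(audio_q, local, global_)  # (2, 50, 512) fused conditioning
print(cond.shape)
```

In this sketch the two attention branches share the decoder's queries, so each generated audio step can draw on immediate visual context and overall video semantics at once; the actual VidMuse architecture may combine the cues differently.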
