MSRL: Scaling Generative Multimodal Reward Modeling via Multi-Stage Reinforcement Learning
Chenglong Wang ⋅ Yifu Huo ⋅ Yang Gan ⋅ Qiaozhi He ⋅ Qi Meng ⋅ Bei Li ⋅ Yan Wang ⋅ Junfu Liu ⋅ Tianjua Zhou ⋅ JingBo Zhu ⋅ Tong Xiao
Abstract
Recent advances in multimodal reward modeling have been largely driven by a paradigm shift from discriminative to generative approaches. Building on this progress, recent studies have further employed reinforcement learning with verifiable rewards (RLVR) to enhance multimodal reward models (MRMs). Despite its success, RLVR-based training typically depends on labeled multimodal preference data, which are costly and labor-intensive to obtain, making it difficult to scale the training of MRMs. To overcome this limitation, we propose a Multi-Stage Reinforcement Learning (MSRL) approach that achieves scalable reinforcement learning for MRMs with limited multimodal data. MSRL redefines the conventional RLVR-based training paradigm: it first learns a generalizable reward reasoning capability from large-scale textual preference data and then progressively transfers this capability to multimodal tasks through caption-based and fully multimodal reinforcement learning stages. Furthermore, we introduce a cross-modal knowledge distillation approach to improve preference generalization within MSRL. Extensive experiments demonstrate that MSRL effectively scales the RLVR-based training of generative MRMs and substantially improves their performance across both visual understanding and visual generation tasks (e.g., 68.5\%$\rightarrow$74.8\% on VL-RewardBench, 69.2\%$\rightarrow$75.4\% on GenAI-Bench), without requiring additional multimodal preference annotations.
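To make the staged schedule concrete, the sketch below lays out the three stages as a plain training loop. It is only an illustration under assumptions: `Example`, `rlvr_update`, and `msrl_train` are hypothetical names not taken from the paper, and the verifiable-reward computation, policy-gradient update, and cross-modal distillation step are left as placeholders.

```python
# Minimal sketch of the MSRL training schedule described above.
# All names here are hypothetical scaffolding; the actual RLVR objective,
# sampling, and distillation details are placeholders.

from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class Example:
    prompt: str                    # question / instruction
    response_a: str                # candidate response A
    response_b: str                # candidate response B
    label: str                     # verifiable preference label: "A" or "B"
    caption: Optional[str] = None  # image caption (used in stage 2)
    image: Optional[bytes] = None  # raw image (used in stage 3)

def rlvr_update(model, batch: Iterable[Example],
                render: Callable[[Example], str]):
    """One RLVR step: the generative reward model emits a reasoning trace
    and a verdict; a rule-based check of the verdict against `label`
    yields the verifiable reward. The policy-gradient update is elided."""
    for ex in batch:
        _model_input = render(ex)  # how the example is presented
        # ... sample verdicts, score against ex.label, update `model` ...
    return model

def msrl_train(model, text_data, caption_data, multimodal_data):
    # Stage 1: learn reward reasoning from large-scale textual preferences.
    for batch in text_data:
        model = rlvr_update(model, batch, render=lambda ex: ex.prompt)
    # Stage 2: bridge modalities by replacing each image with its caption,
    # so no new multimodal preference labels are needed.
    for batch in caption_data:
        model = rlvr_update(
            model, batch,
            render=lambda ex: f"Image caption: {ex.caption}\n{ex.prompt}")
    # Stage 3: fully multimodal RL on the limited labeled multimodal data.
    for batch in multimodal_data:
        model = rlvr_update(
            model, batch,
            render=lambda ex: f"<image>\n{ex.prompt}")
    return model
```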