AMusE: Audio-Visual Benchmark and Alignment Framework for Agentic Multi-Speaker Understanding
Sanjoy Chowdhury ⋅ Karren Dai Yang ⋅ Xudong Liu ⋅ Fartash Faghri ⋅ Pavan Kumar Anasosalu Vasu ⋅ Oncel Tuzel ⋅ Dinesh Manocha ⋅ Chun-Liang Li ⋅ Raviteja Vemulapalli
Abstract
Recent multimodal large language models (MLLMs) such as GPT-4o and Qwen3-Omni show strong perception but struggle in multi-speaker, dialogue-centric settings that demand agentic reasoning: tracking who speaks, maintaining roles, and grounding events across time. These scenarios are central to multimodal audio-video understanding, where models must jointly reason over audio and visual streams in applications such as conversational video assistants and meeting analytics. We introduce $AMusE$, a benchmark designed around tasks that are inherently agentic, requiring models to decompose complex audio-visual interactions into planning, grounding, and reflection steps. It evaluates MLLMs across three modes (zero-shot, guided, and agentic) and six task families, including spatio-temporal speaker grounding and multimodal dialogue summarization. Across all modes, current models exhibit weak multi-speaker reasoning and inconsistent behavior under both non-agentic and agentic evaluation. Motivated by the inherently agentic nature of these tasks and recent advances in LLM agents, we propose $RAFT$, an agentic alignment framework that couples reward optimization, using intrinsic multimodal self-evaluation as the reward, with selective parameter adaptation, yielding data- and parameter-efficient updates. With $RAFT$, we achieve up to a $39.52\%$ relative improvement in accuracy on our benchmark. Together, $AMusE$ and $RAFT$ provide a practical platform for examining and improving agentic reasoning in multimodal models. To facilitate further research, we will publicly release our code and benchmark.
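The abstract describes $RAFT$ only at a high level. As a rough illustration of the two ingredients it names, an intrinsic self-evaluation reward driving a policy-gradient-style update and selective adaptation of a small parameter subset, the PyTorch sketch below is a hypothetical toy, not the authors' implementation: `ToyPolicy`, `self_evaluation_reward`, and `raft_step` are invented placeholders, and the detached log-likelihood "reward" merely stands in for a genuine multimodal self-critique.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, VOCAB = 32, 100

class ToyPolicy(nn.Module):
    """Stand-in for an MLLM: a frozen backbone and head plus a small trainable adapter."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(DIM, DIM)   # stands in for frozen pretrained weights
        self.adapter = nn.Linear(DIM, DIM)    # small trainable module (selective adaptation)
        self.head = nn.Linear(DIM, VOCAB)     # frozen output head
        for m in (self.backbone, self.head):
            for p in m.parameters():
                p.requires_grad_(False)

    def forward(self, fused):
        h = self.backbone(fused)
        h = h + self.adapter(h)               # only this residual path receives gradients
        return self.head(h)                   # logits over answer tokens

def self_evaluation_reward(policy, fused, actions):
    """Toy intrinsic reward: re-score the sampled answer with the model itself and use
    the detached log-likelihood as a crude self-consistency signal. A real system would
    prompt the model to critique its own answer against the audio and video streams."""
    with torch.no_grad():
        logp = F.log_softmax(policy(fused), dim=-1)
        return logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)

def raft_step(policy, optimizer, audio_feat, video_feat):
    fused = 0.5 * (audio_feat + video_feat)               # toy audio-visual fusion
    dist = torch.distributions.Categorical(logits=policy(fused))
    actions = dist.sample()                               # sampled answer tokens
    reward = self_evaluation_reward(policy, fused, actions)
    loss = -(reward * dist.log_prob(actions)).mean()      # REINFORCE-style objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), reward.mean().item()

policy = ToyPolicy()
# Optimize only the adapter parameters; everything else stays frozen.
optimizer = torch.optim.AdamW(
    [p for p in policy.parameters() if p.requires_grad], lr=1e-4
)
audio_feat, video_feat = torch.randn(8, DIM), torch.randn(8, DIM)
for _ in range(3):
    loss, reward = raft_step(policy, optimizer, audio_feat, video_feat)
```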