Towards Reasoning-Preserving Unlearning in Multimodal Large Language Models
Abstract
Machine unlearning aims to erase requested data from trained models without full retraining. For Reasoning Multimodal Large Language Models (RMLLMs), this is especially challenging: intermediate chain-of-thought steps can still leak sensitive information even when final answers are forgotten, and aggressive interventions easily damage general reasoning ability. Yet existing benchmarks do not jointly evaluate how well unlearning methods suppress reasoning-level leakage while preserving reasoning competence. We address this gap with RMLLMU-Bench, the first benchmark for RMLLM unlearning, which extends standard forgetting metrics with explicit reasoning traces and dedicated measures of reasoning leakage and reasoning retention. A systematic evaluation on RMLLMU-Bench shows that current unlearning methods for MLLMs and Large Reasoning Models (LRMs) either leave substantial leakage in the reasoning process or severely degrade reasoning performance. To overcome these limitations, we propose R-MUSE (Reasoning-preserving MLLM Unlearning via Subspace Guidance and Adaptive Steering), a training-free, inference-time intervention framework that steers internal representations to forget both final answers and reasoning traces while explicitly preserving general reasoning ability. Experiments on RMLLMU-Bench demonstrate that R-MUSE achieves a substantially better balance between effective forgetting and reasoning retention than existing approaches. Our code and data will be released upon acceptance.