BALM: A Model-Agnostic Framework for Balanced Multimodal Learning under Imbalanced Missing Rates
Abstract
Learning from multiple modalities often suffers from modality imbalance, where information-rich modalities dominate optimization while weaker or partially missing modalities contribute less. This imbalance becomes more severe in realistic settings with imbalanced missing rates (IMR), where each modality is absent with a different probability, distorting both representation learning and gradient dynamics. We revisit this issue from a training-process perspective and propose BALM, a model-agnostic plug-in framework for balanced multimodal learning under IMR. The framework consists of two complementary modules. The Feature Calibration Module (FCM) operates at the representation level, recalibrating unimodal features with global contextual information to build a shared representation basis across heterogeneous missing patterns. The Gradient Rebalancing Module (GRM) operates at the optimization level, equalizing learning dynamics across modalities by modulating gradient magnitudes and directions from distributional and spatial perspectives. BALM integrates seamlessly into diverse backbones, including multimodal emotion recognition (MER) models, without altering their architectures. Experiments on multiple MER benchmarks confirm that BALM consistently improves robustness and performance under diverse missing-rate and imbalance settings.
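To make the plug-in structure concrete, the sketch below gives a minimal, hypothetical PyTorch rendering of the two modules described above: a feature-calibration layer that gates each unimodal feature with a global cross-modal context, and a gradient-rebalancing step that equalizes gradient magnitudes across modality-specific parameter groups. All names (FeatureCalibration, rebalance_gradients) and internals are illustrative assumptions, not the authors' implementation; in particular, only magnitude modulation is sketched, while the paper's GRM also modulates gradient directions.

```python
# Minimal, hypothetical sketch of BALM's plug-in structure (not the
# authors' code). Assumes all unimodal features share one embedding dim.
import torch
import torch.nn as nn


class FeatureCalibration(nn.Module):
    """FCM-style layer: recalibrate each unimodal feature via global context."""

    def __init__(self, dim: int):
        super().__init__()
        # Learned gate mapping the global context to per-channel scales.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, feats: list[torch.Tensor]) -> list[torch.Tensor]:
        # Global context: mean over the modality features that are present.
        context = torch.stack(feats, dim=0).mean(dim=0)
        # Gate each unimodal feature toward a shared representation basis.
        return [f * self.gate(context) for f in feats]


def rebalance_gradients(params_per_modality: list[list[nn.Parameter]]) -> None:
    """GRM-style step: damp gradients of dominant modalities toward the mean norm.

    Call after loss.backward() and before optimizer.step().
    """
    norms = []
    for group in params_per_modality:
        grads = [p.grad for p in group if p.grad is not None]
        if grads:
            norms.append(torch.stack([g.norm() for g in grads]).norm())
        else:
            norms.append(torch.tensor(0.0))
    target = torch.stack(norms).mean()
    for group, n in zip(params_per_modality, norms):
        if n > 0:
            # Scale down only the modalities whose gradients exceed the mean.
            scale = (target / n).clamp(max=1.0)
            for p in group:
                if p.grad is not None:
                    p.grad.mul_(scale)
```

Because both pieces act only on features and gradients, they can wrap an existing backbone without changing its architecture, which is the model-agnostic property the abstract claims.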