Poster
Enhancing Testing-Time Robustness for Trusted Multi-View Classification in the Wild
Wei Liu · Yufei Chen · Xiaodong Yue
Trusted multi-view classification (TMVC) addresses variations in data quality by evaluating the reliability of each view based on prediction uncertainty at the evidence level, thereby reducing the impact of the low-quality views commonly encountered in real-world scenarios. However, existing TMVC methods often fail to remain robust at test time, particularly when integrating noisy or corrupted views. This limitation arises because the evidence TMVC collects may itself be unreliable: owing to complex view distributions and optimization difficulties, it frequently conveys incorrect information, which ultimately degrades classification performance. To enhance the robustness of TMVC methods in real-world conditions, we propose a generalized evidence filtering mechanism compatible with the fusion strategies commonly used in TMVC, including Belief Constraint Fusion, Aleatory Cumulative Belief Fusion, and Averaging Belief Fusion. Specifically, we frame the identification of unreliable evidence as a multiple-testing problem and introduce p-values to control the risk of false identification. By selectively down-weighting evidence flagged as unreliable during testing, our mechanism ensures robust fusion and mitigates performance degradation. Both theoretical guarantees and empirical results demonstrate significant improvements in the classification performance of TMVC methods, supporting their reliable application in challenging real-world environments.
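The pipeline described above — score each view's evidence by its uncertainty, convert the scores to p-values, control the rate of false identifications with a multiple-testing procedure, and down-weight flagged views before fusion — can be sketched concretely. The sketch below is an illustration, not the authors' implementation: it assumes subjective-logic Dirichlet evidence (uncertainty u = K / S), computes conformal-style p-values against a held-out clean calibration set, applies the Benjamini-Hochberg procedure as one concrete way to control false identifications, and fuses with Aleatory Cumulative Belief Fusion (a weighted sum of per-view evidence). The helper names (conformal_p_value, filter_and_fuse), the calibration set, and the down_weight factor are all assumptions introduced for illustration.

import numpy as np


def dirichlet_uncertainty(alpha):
    # Subjective-logic uncertainty of a Dirichlet opinion: u = K / S,
    # where S is the Dirichlet strength (sum of the K parameters).
    K = alpha.shape[-1]
    return K / alpha.sum(axis=-1)


def conformal_p_value(u_test, calib_u):
    # Empirical p-value for the null "this view's evidence is reliable",
    # computed against uncertainties from a clean calibration set (an
    # assumption of this sketch). Unusually high test uncertainty
    # yields a small p-value.
    return (1 + np.sum(calib_u >= u_test)) / (len(calib_u) + 1)


def benjamini_hochberg(p_values, q=0.1):
    # BH step-up procedure: reject nulls (flag views as unreliable)
    # while controlling the false discovery rate at level q.
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject


def filter_and_fuse(alphas, calib_u, q=0.1, down_weight=0.1):
    # Flag unreliable views via multiple testing, down-weight their
    # evidence, then apply aleatory cumulative belief fusion:
    # fused evidence is the weighted sum of per-view evidence e_v = alpha_v - 1.
    p = [conformal_p_value(dirichlet_uncertainty(a), cu)
         for a, cu in zip(alphas, calib_u)]
    flagged = benjamini_hochberg(p, q=q)
    weights = np.where(flagged, down_weight, 1.0)
    evidence = sum(w * (a - 1.0) for w, a in zip(weights, alphas))
    return evidence + 1.0, flagged


# Toy example: three views over K = 4 classes; the third view is
# corrupted and yields weak, nearly uniform evidence.
rng = np.random.default_rng(0)
alphas = [np.array([9.0, 1.2, 1.1, 1.0]),
          np.array([8.5, 1.4, 1.0, 1.1]),
          np.array([1.2, 1.1, 1.3, 1.2])]
calib_u = [rng.uniform(0.05, 0.4, size=200) for _ in alphas]
alpha_fused, flagged = filter_and_fuse(alphas, calib_u)
print("flagged as unreliable:", flagged)        # expect only the third view
print("fused prediction:", int(np.argmax(alpha_fused)))

In this toy run, only the high-uncertainty third view falls below its BH threshold and is flagged, so its evidence enters the cumulative fusion at a fraction of its original weight while the two informative views dominate the fused prediction. The same filter-then-fuse structure would apply with the other fusion operators named in the abstract in place of the cumulative sum.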