Dual-Prototype-Guided Multi-task Learning for Unsupervised Anomaly Detection and Classification
Abstract
Unsupervised Anomaly Detection (UAD) and anomaly classification are widely used in industrial and medical scenarios: UAD localizes anomalous regions at the fine-grained pixel level, while anomaly classification distinguishes anomaly types at the region level. However, existing approaches typically treat the two tasks independently and sequentially, overlooking the benefit of joint training for suppressing Local Visual Ambiguity (LVA), which arises when different types of anomalies share similar local visual patterns. Moreover, a standard multi-task learning framework cannot be applied directly to joint training of the two tasks, since UAD and anomaly classification exhibit incompatible feature preferences. To address these limitations, we propose the Prototype-Guided Semi-Supervised Feature Disentanglement (PG-SFD) framework, which shifts the paradigm from implicit feature sharing to explicit feature disentanglement: a Dual-Prototype Disentanglement Module (DPRM) explicitly constructs normal and category prototypes to eliminate implicit normal-abnormal semantic coupling. In addition, a Differential Gated Interaction (DGI) module and Geometry-Regularized Optimization (GRO) are proposed for cross-task feature differential injection and gradient-conflict mitigation, forming a cohesive framework together with the DPRM. PG-SFD is highly effective on both UAD and weakly supervised classification tasks, and it performs stably across multiple dataset types, including industrial and medical benchmarks, indicating strong generalizability.
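To make the dual-prototype idea concrete, the following is a minimal numerical sketch of one plausible decomposition: a feature is split into its component along a normal prototype (normality evidence) and a residual that is matched against category prototypes. The function name, the projection-based decomposition, and all tensors here are illustrative assumptions, not the paper's actual DPRM implementation.

```python
import numpy as np

def disentangle(feature, normal_proto, category_protos):
    """Hypothetical sketch: decompose `feature` into a component along the
    normal prototype and an anomaly residual, then score the residual
    against each category prototype by cosine similarity.
    (Assumed decomposition; not the paper's implementation.)"""
    n = normal_proto / np.linalg.norm(normal_proto)
    normal_component = (feature @ n) * n        # projection onto normal prototype
    residual = feature - normal_component       # anomaly-specific part
    anomaly_score = np.linalg.norm(residual)    # region-level anomaly evidence
    if anomaly_score < 1e-8:                    # effectively normal: no category
        return normal_component, residual, anomaly_score, None
    r = residual / anomaly_score
    cats = category_protos / np.linalg.norm(category_protos, axis=1, keepdims=True)
    sims = cats @ r                             # cosine similarity per category
    return normal_component, residual, anomaly_score, int(np.argmax(sims))
```

In this toy view, separating the normal component from the residual is what removes the normal-abnormal coupling: the anomaly score depends only on the residual's magnitude, while the category decision depends only on its direction.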