Hunting Normality from Query Sample via Residual Learning for Generalist Anomaly Detection
Jimin Xiao
Abstract
Generalist Anomaly Detection (GAD) seeks to overcome the domain-specific limitations of traditional anomaly detection by training a unified model that generalizes to unseen classes. A promising GAD strategy uses residual features to construct a class-invariant space. However, existing methods that directly model the distribution of residuals face unpredictable risks arising from the inconsistency between residual and instance features: subtle defects may yield small residuals (false negatives), while residuals of normal features can be large owing to the diversity of normality (false positives). To address these limitations, we propose a novel residual-based learning framework that repurposes residuals as a guide for learning instance-level normality, rather than modeling their distribution directly. Our framework features two new attention-based modules: Residual Feature Learning (RFL), which uses learnable proxies to capture diverse patterns from the residual features, and Normality Learning from Support (NLS), which leverages these residual proxies to aggregate query-related normality proxies from the support instance features. These dynamically generated normality proxies are then used to hunt for normality within the query patch features, enabling accurate anomaly localization. Extensive experiments on GAD benchmarks demonstrate the effectiveness of our method. The code will be made publicly available.
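To make the pipeline concrete, the following is a minimal PyTorch-style sketch of the residual-guided flow described above. All shapes, the nearest-neighbor residual computation, and the single-head dot-product attention are illustrative assumptions, not the paper's actual implementation; the learnable proxies would normally be trained, not random.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Assumed shapes: N query patches, M support patches, K proxies, feature dim D.
N, M, K, D = 196, 588, 8, 64

q = F.normalize(torch.randn(N, D), dim=-1)  # query patch features
s = F.normalize(torch.randn(M, D), dim=-1)  # support patch features

# Residual features: each query patch minus its nearest support patch
# (one common way to form residuals; an assumption here).
nn_idx = (q @ s.T).argmax(dim=-1)
residual = q - s[nn_idx]                                       # (N, D)

# RFL (sketch): learnable proxies attend over the residual features
# to capture diverse residual patterns.
proxies = torch.randn(K, D, requires_grad=True)
attn = torch.softmax(proxies @ residual.T / D ** 0.5, dim=-1)  # (K, N)
residual_proxies = attn @ residual                             # (K, D)

# NLS (sketch): residual proxies aggregate query-related normality
# proxies from the support instance features.
attn2 = torch.softmax(residual_proxies @ s.T / D ** 0.5, dim=-1)
normality_proxies = attn2 @ s                                  # (K, D)

# "Hunting normality": a query patch matching no normality proxy
# gets a high anomaly score, enabling localization.
sim = q @ F.normalize(normality_proxies, dim=-1).T             # (N, K)
anomaly_score = 1 - sim.max(dim=-1).values                     # (N,)
print(anomaly_score.shape)  # torch.Size([196])
```

In this reading, the residuals never feed a density model directly; they only steer which support features are pooled into per-query normality proxies, which is the key contrast with distribution-modeling residual methods.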