DDSF: Robust Few-Shot Learning via Disentangled Subspaces with Determinantal Point Process
Abstract
The performance of mean-based prototypical methods in few-shot learning is frequently compromised by noise and hard positives: entangled feature representations make class prototypes unstable. We present a ``Filter-Repair-Expand'' framework grounded in Determinantal Point Process (DPP) theory. DPP serves as the unifying mechanism across all three stages: it estimates sample confidence to filter anomalous samples from the initial support set (Filter), guides a diffusion process via volume maximization to repair and enrich sample representations (Repair), and maximizes the volume of complementary disentangled subspaces to construct robust, diverse prototype subspaces (Expand). Experiments on multiple benchmarks establish new state-of-the-art performance and demonstrate significant gains in few-shot learning robustness.
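The DPP-based confidence idea underlying the Filter stage can be illustrated with a minimal sketch: under an L-ensemble kernel built from cosine similarities, each sample's confidence can be taken as its marginal contribution to the log-volume (log-determinant) of the kernel, so near-duplicate or redundant samples score low and diverse samples score high. The function name `dpp_confidence` and the leave-one-out scoring rule here are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def dpp_confidence(features):
    """Leave-one-out DPP volume score (illustrative, not the paper's
    exact method): a sample whose removal barely shrinks the kernel's
    log-determinant spans a redundant or noisy direction."""
    # Cosine-similarity L-ensemble kernel with a small jitter for stability.
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    L = X @ X.T + 1e-6 * np.eye(len(X))
    full_logdet = np.linalg.slogdet(L)[1]
    scores = []
    for i in range(len(X)):
        keep = [j for j in range(len(X)) if j != i]
        sub_logdet = np.linalg.slogdet(L[np.ix_(keep, keep)])[1]
        # Marginal log-volume contributed by sample i.
        scores.append(full_logdet - sub_logdet)
    return np.array(scores)

# A near-duplicate sample adds almost no volume, so it scores lowest
# and would be filtered first.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
feats[3] = feats[0] + 1e-3 * rng.normal(size=8)  # near-duplicate of sample 0
conf = dpp_confidence(feats)
```

Filtering then amounts to dropping the lowest-scoring samples before prototype construction.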