SATTC: Structure-Aware Label-Free Test-Time Calibration for Cross-Subject EEG-to-Image Retrieval
Abstract
Cross-subject EEG-to-image retrieval for visual decoding is hampered by subject shift and by hubness in the embedding space; both distort similarity geometry and destabilize top-k rankings, making small candidate shortlists unreliable. We introduce SATTC (Structure-Aware Test-Time Calibration), a label-free test-time calibration head that operates directly on the similarity matrix produced by frozen EEG and image encoders. SATTC combines a geometric expert, which applies subject-adaptive whitening to the EEG embeddings together with an adaptive variant of Cross-domain Similarity Local Scaling (CSLS), with a structural expert that leverages mutual nearest neighbors, bidirectional top-k ranks, and class popularity; the two are fused via a simple Product-of-Experts rule. On the THINGS-EEG cross-subject benchmark under a strict leave-one-subject-out protocol, simply standardizing inference (cosine similarities, ℓ2-normalized embeddings, and candidate whitening) already yields a strong baseline that improves Top-1 and Top-5 accuracy over the original ATM retrieval setup. Adding SATTC on top of this standardized inference further improves Top-1 and Top-5 accuracy while substantially reducing hubness, yielding more reliable small-k shortlists across multiple EEG encoders and establishing SATTC as a generic test-time calibration head for zero-shot neural decoding from EEG.
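To make the geometric side concrete, below is a minimal NumPy sketch of standard (non-adaptive) CSLS rescoring applied to a cosine-similarity matrix. Function names, array shapes, and the neighborhood size k are illustrative assumptions, not SATTC's actual interface; SATTC's adaptive CSLS variant, the subject-adaptive whitening step, and the structural expert are beyond this sketch.

```python
import numpy as np

def l2_normalize(X, eps=1e-8):
    # Row-wise L2 normalization so that dot products equal cosine similarities.
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)

def csls_scores(eeg_emb, img_emb, k=10):
    """Hubness-corrected retrieval scores via Cross-domain Similarity
    Local Scaling:  csls(x, y) = 2*cos(x, y) - r_img(x) - r_eeg(y),
    where r_* is the mean cosine similarity to the k nearest
    cross-domain neighbors. Shapes: eeg_emb (n_queries, d),
    img_emb (n_candidates, d)."""
    E = l2_normalize(eeg_emb)
    I = l2_normalize(img_emb)
    S = E @ I.T                                   # cosine similarity matrix
    # Mean similarity of each EEG query to its k nearest image candidates.
    r_q = np.sort(S, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    # Mean similarity of each image candidate to its k nearest EEG queries.
    r_c = np.sort(S, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    # Penalize "hub" candidates that are near-neighbors of many queries.
    return 2.0 * S - r_q - r_c

# Toy usage with random embeddings (32 EEG queries, 100 image candidates).
rng = np.random.default_rng(0)
scores = csls_scores(rng.normal(size=(32, 64)), rng.normal(size=(100, 64)))
top1 = scores.argmax(axis=1)                      # calibrated Top-1 retrieval
```

The key effect is the `- r_c` term: candidates that sit abnormally close to many queries (hubs) have their scores uniformly discounted, which stabilizes the top-k shortlist.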