Black-Box Domain Adaptation for Object Detection with Retention-Driven Knowledge Compression
Abstract
Black-Box Domain Adaptation (BBDA) is a practical yet challenging setting that enables pre-trained detectors to be deployed on new, unlabeled target domains without access to source data or model parameters. Compared with previous domain adaptation settings, BBDA offers both stronger data-privacy protection and greater portability. Despite growing interest, existing BBDA strategies remain difficult to apply directly to object detection: most prior work targets classification and segmentation, tasks that involve no bounding-box localization and rely on different learning mechanisms. In this paper, inspired by lifelong learning, we propose Retention-Driven Knowledge Compression (RDKC), which brings a brain-inspired continual learning process to BBDA for object detection. Specifically, RDKC consists of two key components: Memory Retention (MR) and Scene Compression (SC). MR is tailored to object detection under the BBDA setting: it performs memorized contrastive learning on partitioned regions, exploiting informative cues from reliable areas while filtering out potential noise in the noisy predicted labels. SC introduces a contrastive mechanism between near-view and far-view regions, enabling the model to learn from far-view regions under the guidance of near-view cues. Experimental results demonstrate that, under the BBDA setting, RDKC outperforms previous state-of-the-art methods across all evaluated benchmarks.
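The abstract does not give SC's loss in equation form. As a rough illustration only, the near-/far-view contrast it describes could take the shape of a standard InfoNCE-style objective, where each far-view region embedding is pulled toward its paired near-view embedding and pushed away from other regions. The function name, pairing scheme, and temperature below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def region_contrastive_loss(near_feats, far_feats, temperature=0.07):
    """Illustrative InfoNCE-style loss between paired region embeddings.

    near_feats, far_feats: (N, D) arrays of region embeddings, where row i
    of each array is assumed to describe the same object region seen from
    a near view and a far view, respectively.
    """
    # L2-normalize embeddings so similarities are cosine similarities
    near = near_feats / np.linalg.norm(near_feats, axis=1, keepdims=True)
    far = far_feats / np.linalg.norm(far_feats, axis=1, keepdims=True)

    # (N, N) similarity matrix; entry (i, j) compares far region i
    # with near region j, scaled by the temperature
    logits = far @ near.T / temperature

    # Softmax cross-entropy with the matching pair (the diagonal)
    # as the positive target for each far-view region
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Under this sketch, perfectly aligned near/far pairs drive the loss toward zero, while mismatched pairs yield a high loss, which is the behavior one would want from a cue-guided contrast between views.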