Poster
Learning Endogenous Attention for Incremental Object Detection
Xiang Song · Yuhang He · Jingyuan Li · Qiang Wang · Yihong Gong
In this paper, we focus on the challenging Incremental Object Detection (IOD) problem. Existing IOD methods follow an image-to-annotation alignment paradigm, which attempts to complete the annotations for old categories and subsequently learns both new and old categories in new tasks. This paradigm inherently introduces missing, redundant, or inaccurate annotations of old categories, resulting in suboptimal performance. Instead, we propose a novel annotation-to-instance alignment IOD paradigm and develop a corresponding method named Learning Endogenous Attention (LEA). Inspired by the human brain, LEA enables the model to focus on annotated task-specific objects while ignoring irrelevant ones, thus solving the incomplete-annotation problem in IOD. Concretely, our LEA consists of Endogenous Attention Modules (EAMs) and an Energy-based Task Modulator (ETM). During training, we add dedicated EAMs for each new task and train them to focus on the new categories. During testing, the ETM predicts task IDs using energy functions, directing the model to detect task-specific objects. The detection results corresponding to all predicted task IDs are combined as the final output, thereby alleviating the catastrophic forgetting of old knowledge. Extensive experiments on COCO 2017 and Pascal VOC 2007 demonstrate the effectiveness of our method.
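The inference pipeline the abstract describes (energy-based task-ID prediction, task-specific detection, and combination of results) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class names `EAM` and `ETM` follow the abstract, but the category-filtering behavior, the prototype-based energy function, and the below-mean threshold for selecting active task IDs are all assumptions made for illustration.

```python
import numpy as np


class EAM:
    """Hypothetical Endogenous Attention Module: stands in for a module
    that only produces detections for its own task's categories."""

    def __init__(self, task_categories):
        self.task_categories = set(task_categories)

    def detect(self, proposals):
        # Keep only proposals whose category belongs to this task,
        # mimicking attention that ignores irrelevant objects.
        return [p for p in proposals if p["category"] in self.task_categories]


class ETM:
    """Hypothetical Energy-based Task Modulator: assigns one scalar energy
    per task; lower energy suggests the image matches that task."""

    def __init__(self, prototypes):
        # One prototype vector per task (assumed learned elsewhere).
        self.prototypes = np.asarray(prototypes)

    def energies(self, image_feature):
        # Placeholder energy: negative similarity to each task prototype.
        return -self.prototypes @ image_feature


def incremental_detect(image_feature, proposals, eams, etm):
    # Predict plausible task IDs (here: tasks with below-average energy),
    # run the corresponding task-specific EAMs, and combine their outputs.
    e = etm.energies(image_feature)
    active = [t for t in range(len(eams)) if e[t] <= e.mean()]
    results = []
    for t in active:
        results.extend(eams[t].detect(proposals))
    return results
```

For example, with two tasks (animals, then vehicles) and an image feature closest to the first task's prototype, only the first task's EAM contributes detections; adding a third task would mean appending one more EAM without retraining the earlier ones.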