PET-DINO: Unifying Visual Cues into Grounding DINO with Prompt-Enriched Training
Abstract
Open-Set Object Detection (OSOD) enables the recognition of novel categories beyond a fixed class set, but it faces two challenges: aligning text representations with complex visual concepts, and the scarcity of image-text paired samples for rare categories. These issues lead to suboptimal performance in specialized domains or on complex objects. Recent visual-prompted methods partially address them but often involve complex multi-modal designs and multi-stage optimization, which lengthens the development cycle. In addition, effective training strategies for data-driven OSOD models remain largely unexplored. To address these challenges, we propose PET-DINO, a universal object detector that supports both text and visual prompts. Our visual prompt generation scheme builds on an advanced text-prompted detector, overcoming the limitations of text-only representation guidance while shortening the development cycle. We introduce two prompt-enriched training strategies: Intra-Batch Parallel Prompting (IBP) at the iteration level and Dynamic Memory-Driven Prompting (DMD) at the overall training level. Together, these strategies model multiple prompt routes simultaneously, align training with diverse real-world usage scenarios in parallel, and improve classification. Extensive experiments demonstrate that our visual prompt generation scheme, built on text-prompt-based detection pretraining, achieves a higher performance ceiling than using visual prompts alone. Our method achieves strong zero-shot detection performance on COCO, LVIS, and ODinW, and excels across various prompt-based detection protocols. In-domain evaluations further demonstrate robust localization performance.
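The abstract does not specify how the iteration-level strategy is implemented; one plausible reading of Intra-Batch Parallel Prompting is that each training iteration runs every prompt route (text and visual) on the same batch and combines the per-route losses. The sketch below illustrates that reading only; `ibp_train_step`, `toy_model`, and the batch layout are hypothetical names invented here, not part of the paper.

```python
def ibp_train_step(batch, model, routes=("text", "visual")):
    # Hypothetical sketch of iteration-level parallel prompting:
    # every prompt route sees the same batch in one iteration,
    # and the per-route losses are averaged so both routes are
    # optimized jointly.  The batch layout is an assumption.
    losses = [model(batch["images"], batch[f"{route}_prompts"], route=route)
              for route in routes]
    return sum(losses) / len(losses)


# Toy stand-in for a detector's loss function (not a real model):
# returns a scalar per route so the step above can be exercised.
def toy_model(images, prompts, route):
    return float(len(prompts)) * (1.0 if route == "text" else 0.5)


batch = {
    "images": ["img0", "img1"],
    "text_prompts": ["cat", "dog"],
    "visual_prompts": ["box0", "box1"],
}
loss = ibp_train_step(batch, toy_model)  # averages the two route losses
```

In an actual detector the per-route losses would be backpropagated jointly, so gradients from both the text and visual prompt routes update the shared backbone in the same step.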